Artificial intelligence, for all its wonders, is starting to earn a bad reputation among consumers. AI chatbots are prone to hallucinating, that is, making up answers when they don't know how to respond, confidently presenting incorrect information as if it were fact. Google's AI search redesign got so bad that Google had to admit it didn't mean to advise users to put glue on pizza or eat rocks, and it later scaled the feature back for certain search queries after its many mistakes. Microsoft's AI-powered Recall feature, meanwhile, will now be off by default after security researchers found flaws.
In this environment, launching an AI-powered iPhone could be seen as a risk.
But with iOS 18, which Apple showed off at WWDC 2024, the company is taking a more cautious approach: instead of overwhelming users with more AI features than they can count, the Cupertino tech giant is thoughtfully deploying AI where it thinks it will be useful. That means the technology won't be shoehorned in where it could threaten the carefully designed experience of using an Apple device.
Not only is Apple rebranding AI as “Apple Intelligence” for its own purposes, but it's integrating the new AI features into iOS 18 in a more practical way.
Outside of a few sillier additions like AI emoji, Apple Intelligence comes to everyday apps and features, with additions such as writing assistance and proofreading tools, AI summaries and transcriptions, prioritized notifications, smart replies, better search, photo editing, a smarter Siri, and a version of Do Not Disturb that automatically understands which important messages need to get through, among other things.

Combined, these features may not be as exciting as a chatbot like ChatGPT, which can answer almost any question, putting a world of knowledge scraped from the internet at your fingertips. Nor are they as breathtaking, but controversy-courting, as tools that let you create AI photos in the style of any artist.
Instead, Apple has defined the table stakes for what an AI-powered device should be able to do.
For now, that means it should be able to help you understand what's important in long bodies of text, whether notes, emails, documents or many, many notifications. It should make it easier to find things using natural language queries, including what's in your photos. It should be able to transcribe audio, catch your grammar and spelling mistakes, rewrite text in different styles and suggest common replies. It should be able to make basic photo edits, like removing unwanted objects or people from your images. And it should be able to generate images on demand, but with serious guardrails in place.

Presented this way, some of Apple Intelligence's new features don't even feel like AI; they simply feel like smarter tools.
That's an intentional move on Apple's part. The company says it's focusing on use cases where it could identify specific problems that are far more solvable, rather than dealing with the complications that come with working with an AI chatbot. By narrowing its focus, Apple is more likely to deliver the results users expect, not hallucinations, and can limit the dangers and safety issues that come from AI misuse and prompt engineering.
In addition, Apple's AI carefully walks the line between offering assistance to the end user and being an independent source of creation, the latter of which doesn't necessarily delight creatives, a big demographic for Apple products. If you want to make your writing more concise or summarize an email, Apple Intelligence can help. If you want to fire off a quick reply to an email, a suggested response can be useful there, too. If, however, you want to conjure up an entire bedtime story out of thin air, Apple will offer you the option of asking ChatGPT for help with that instead.

When it comes to image creation, the company takes a similar path. You can use Apple Intelligence to create images while texting a friend, but the feature draws on its understanding of the people and subjects in your conversation, and presumably it won't encourage you to make an AI image if you're texting about explicit or inappropriate subjects. The same goes for adding images to other apps like Keynote, Pages and Freeform. Even in Image Playground, a new standalone AI image app, you're guided toward suggestions and limited to selecting styles. You can't make photorealistic deepfakes with Apple's app, in other words.
If you want to ask Siri a question it doesn't have the answer to, it can offer to hand your query over to ChatGPT (with your consent). That way, you can explore the wider world of chatbots and the many answers they provide, if you choose. But when ChatGPT inevitably screws up, the mistake is on it, not on Apple.
In fact, much of what Apple offers isn't a way to “chat” with an AI at all. Instead, it's a way to leverage AI for narrow use cases where a button click can transform text, or where AI intuitively knows what you should see: an urgent notification about your mom's text, not a DoorDash coupon. The AI is often in the background or off to the side as a tool; it's not the main interface for getting things done.

This is where Apple Intelligence succeeds. It feels like a new layer on top of your existing apps, where it solves everyday problems (or maybe just lets you have fun with emoji); it doesn't try to take over the world, the way experts and departing OpenAI leaders keep warning us AI eventually will. Outside of a few features, like Genmoji, which is simply silly, Apple Intelligence is boring and practical. That's why it could actually work.
Apple Intelligence launches in beta this fall.