On Monday at WWDC 2024, Apple unveiled Apple Intelligence, its long-awaited, ecosystem-wide push into generative AI. As earlier rumors suggested, the new feature is called Apple Intelligence (AI, get it?). The company promised the functionality will be built with safety at its core, along with highly personalized experiences.
“More importantly, it has to understand you and be grounded in your personal context, like your routine, your relationships, your communications, and more,” Tim Cook noted. “And of course, it has to be built with privacy from the ground up. All of this goes beyond artificial intelligence. It’s personal intelligence, and it’s the next big step for Apple.”
The company pitched the functionality as an integral part of its various operating systems, including iOS, macOS, and the newest, visionOS.

“It has to be powerful enough to help with the things that matter most to you,” said Cook. “It has to be intuitive and easy to use. It has to be deeply integrated into your product experiences.”
SVP Craig Federighi added: “Apple Intelligence is grounded in your personal data and your context.” The feature will effectively draw on the personal data users enter into apps such as Calendar and Maps.
The system is built on large language models and intelligence. Much of that processing is done locally, according to the company, using the latest version of Apple silicon. “Many of these models run entirely on device,” Federighi said during the event.
That said, these consumer systems still have their limits. As such, some of the heavy lifting has to be done off-device, in the cloud. Apple is adding Private Cloud Compute to the offering. The back end uses servers running on Apple chips, in order to bolster privacy for this highly personal data.
Apple Intelligence also includes what is probably the biggest update to Siri since it was announced more than a decade ago. The company says the assistant is “more deeply integrated” into its operating systems. In the case of iOS, that means swapping the familiar Siri icon for a glowing border that surrounds the edge of the screen while it’s in use.
Siri is no longer just a voice interface. Apple is also adding the ability to type queries directly into the system to access its generative AI-based intelligence. It’s an acknowledgment that voice often isn’t the best interface for these systems.

App Intents, meanwhile, provide a way to integrate the assistant more directly into different applications. It will start with first-party apps, but the company will open up access to third parties as well. The addition should considerably expand the kinds of things Siri can do directly.
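Apple did not show code on stage, but App Intents is an existing Swift framework, and exposing an app action to the system looks roughly like the following sketch (the intent name and its logic here are illustrative assumptions, not from Apple's announcement):

```swift
import AppIntents

// Hypothetical intent: lets Siri open the note most recently edited in our app.
struct OpenLatestNoteIntent: AppIntent {
    // The phrase the system surfaces for this action.
    static var title: LocalizedStringResource = "Open Latest Note"
    static var description = IntentDescription("Opens the most recently edited note.")

    func perform() async throws -> some IntentResult {
        // App-specific logic would go here (fetch the note, navigate to it).
        return .result()
    }
}
```

Under the new system, intents declared this way are what allow Siri to invoke app functionality directly, rather than just launching the app.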
The offering will also unlock multitasking in a deeper way, allowing a kind of cross-app compatibility. That means, for example, users won’t have to keep switching between Calendar, Mail, and Maps in order to schedule meetings.
Apple Intelligence will be integrated into most first-party applications. That includes things like the ability to help compose messages inside Mail (as well as third-party apps), or simply using smart replies to respond. This is, notably, a feature Google has offered in Gmail for some time now, and has continued to build on using its own generative AI model, Gemini.
The company is even bringing the functionality to emoji with Genmoji (yes, that’s the name). The feature uses a text field to create custom emoji. Image Playground, meanwhile, is an image generator that is integrated into apps such as Messages, Keynote, Pages, and Freeform. Apple is also bringing a standalone Image Playground app to iOS and opening up access to the offering through an API.

Image Wand, on the other hand, is a new Apple Pencil tool that lets users circle text to create an image. In effect, it’s Apple’s take on Google’s Circle to Search, focused solely on images.
Search has also been built out for content such as photos and videos. The company promises more natural language search within those apps. The GenAI models also make it easier to build slideshows inside Photos, again using natural language prompts. Apple Intelligence will roll out to the latest versions of its operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2. It’s available for free with those updates.
The functionality will arrive on the iPhone 15 Pro and on M1-powered Mac and iPad devices. The standard iPhone 15 won’t get the features, likely due to the limitations of its chip.
As expected, Apple also announced a partnership with OpenAI that brings ChatGPT to offerings like Siri. The GPT-4o-powered functionality uses that company’s image and text generation. The offering will give users access without having to sign up for an account or pay a fee (though they can still upgrade to the premium tier).
This is coming to iOS, iPadOS, and macOS later this year. The company says it will also bring integration with other third-party LLMs, though it hasn’t offered many details. It seems likely that Google’s Gemini is at the top of that list.