Apple will present a number of artificial intelligence features at the Worldwide Developers Conference (WWDC) on June 10, Bloomberg reports.

Apple plans to expand AI capabilities on the iPhone, Mac, and iPad using its own large language models. In addition, the company plans to offer cloud-based AI services running on its own chips in its data centers.

This will allow the company to improve Siri’s voice capabilities, making the virtual assistant more conversational, and introduce features aimed at helping users with everyday tasks – a concept Apple calls “proactive intelligence.”

These new services will include automatic message summarization, quick summaries of news articles, and transcription of voice notes. Apple will also improve existing features such as automatic calendar filling and app suggestions. AI-based photo editing is also slated for improvement, although it is unlikely to surpass the AI capabilities of Adobe's applications.

So far, Apple’s plans for artificial intelligence do not include a chatbot. The company’s generative AI technology is not yet developed enough to offer a rival to ChatGPT or Gemini, and Apple’s top executives are wary of potential problems with inaccurate chatbot answers and copyright. However, the company recognizes the usefulness of the technology. To address this gap, Apple has partnered with OpenAI to integrate ChatGPT into iOS 18, the next version of its iPhone software. The partnership, which will be announced at WWDC, calls for OpenAI, led by Sam Altman, to provide the infrastructure needed for the influx of users expected later this year.

Despite this cooperation, Apple recognizes that long-term success in artificial intelligence will require developing its own chatbot and deeply integrating it into its ecosystem. For now, the combination of its in-house AI features and the partnership with OpenAI is considered sufficient.

Apple also considered licensing Google’s Gemini for iOS 18, but no agreements were reached as WWDC approached.

In addition, Apple has long considered creating its own search engine, potentially with a focus on privacy features similar to DuckDuckGo. Given how closely search and AI are intertwined, revisiting this idea could pay off in the long run, despite the significant revenue Apple receives for keeping Google as the default search engine on its devices.

Moreover, recent achievements by OpenAI and Google have put additional pressure on Apple. OpenAI’s GPT-4o model, which is capable of realistic conversations and acting as a customer-service agent, and the integration of generative AI into Google Search could be strong foundations for those companies’ future growth. By contrast, Apple’s in-house AI features may not seem as revolutionary. Internally, Apple acknowledges it is playing catch-up, and there are fears that users may ignore the new features in iOS 18 because they won’t be impressive enough.

Another challenge is that one of the advantages OpenAI and Google hold is the rapid pace at which they introduce new features, which keeps users’ attention. In contrast, improvements to Siri and other Apple features come slowly, and Apple’s annual iOS update cycle further limits its ability to keep pace with AI development. While the agreement with OpenAI and the new cloud services will help close the gap, there are no plans to speed up the iOS release cycle.

However, Apple is accelerating its hardware upgrades. The recent release of the iPad Pro with the M4 chip promises significant improvements in AI processing, and by 2025 the M4 processor is expected to appear across all Mac models. The upcoming iPhone 16 Pro, to be unveiled in September, will also feature an improved processor with faster AI processing.

Apple’s upcoming announcements at WWDC will provide a clearer picture of the company’s AI strategy and its efforts to remain competitive in the rapidly evolving technological landscape.