Google and Alphabet CEO Sundar Pichai recently said that more than a quarter of all new code at Google is now generated by artificial intelligence. The approach significantly speeds up development: AI writes the code, which engineers then review and approve. Not only does this raise productivity, Pichai noted, it also frees engineers to focus on more complex tasks.
Using generative models to write code is nothing new in itself, but adoption at this scale could shrink the number of programming positions. The startup Cognition Labs, for example, recently unveiled Devin, an AI software engineer that can carry out engineering projects almost autonomously. ChatGPT has also reportedly performed well enough to pass Google's coding interview for a Level 3 engineer, further evidence of AI's growing capabilities in programming.
Widespread use of AI to write code carries risks, however. If a model is trained on restrictively licensed or outdated code, its output can raise copyright issues or introduce security vulnerabilities. Some companies that already ship AI-generated code have faced cyber threats and outages, underscoring the importance of human oversight.
Google continues to roll out AI across its products and platforms, including improved AI Overviews in search results and new features on YouTube. The company's next step is the release of the Gemini 3.0 model, scheduled for December. Gemini 3.0 is expected to work as an autonomous agent that can operate a computer: opening programs, clicking through interfaces, and even completing projects without human intervention. If such a model ships, Google will be able to delegate even more tasks to its AI, further strengthening its technological advantage.
Such systems have their limits, however. AI can produce errors ("hallucinations") or behave deceptively, which makes strict security measures and constant expert monitoring necessary.