
Illia Polosukhin On Inventing The Tech Behind Generative AI At Google

28 Jun 2024

Transformer Architecture

  • The Transformer paper, co-authored by Illia Polosukhin and others at Google in 2017, introduced the concept of transformers, which are now widely used in generative AI.
  • Transformers take their name from their ability to transform an input sequence into a desired output sequence.
  • The idea for transformers came from Jakob Uszkoreit, who proposed using an attention mechanism to answer questions by reading an entire document in parallel rather than processing it word by word, as recurrent models do.
  • Polosukhin quickly built a prototype of the transformer architecture, which became the foundation for further development and refinement.
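The parallel reading described above is realized by scaled dot-product attention, the core operation of the 2017 paper. A minimal NumPy sketch with toy data (the full model adds learned projections, multiple heads, and masking, which are omitted here):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every position attends to every other position in one
    parallel step, instead of scanning the sequence in order."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

# Toy example: 4 token positions, 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)          # self-attention
print(out.shape)  # (4, 8)
```

Because the whole score matrix is computed at once, the operation parallelizes across the sequence, which is what made the architecture practical to train at scale.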

Generative Pre-trained Transformer (GPT)

  • The Generative Pre-trained Transformer (GPT) builds on the transformer architecture with a simple objective: predict the next word in a sequence. Trained at scale, this objective yields impressive results in world knowledge and reasoning.
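The next-word objective can be illustrated with a deliberately tiny stand-in model. Here a bigram count table plays the role of the trained transformer; the greedy generation loop is the same idea GPT uses (this is an illustrative sketch, not the actual architecture):

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts stand in for a trained transformer.
corpus = "the model predicts the next word and the next word".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt, steps=4):
    """Repeatedly predict the most likely next word and append it."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        # Greedy decoding: always take the single most likely next token.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))  # the next word and the
```

A real GPT replaces the count table with a transformer that scores the entire vocabulary at each step, and usually samples from that distribution rather than decoding greedily.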

Near Protocol

  • Near Protocol was initially created as a solution for paying computer science students around the world who were helping to gather training data for Near AI.
  • Google was not directly involved in the creation of Near Protocol, but there have been talks with Google's VC arm from time to time.

Open-Sourcing and Democratization of AI

  • Open-sourcing the transformer concept made sense from a research perspective, as it allowed others to build upon and leverage it as a platform.
  • OpenAI took a risk by releasing ChatGPT to the public and was rewarded for it, gaining brand recognition and a first-mover advantage.
  • Democratization of AI models is necessary, allowing individuals to choose and train models based on their preferences.

Challenges and Concerns

  • Google's recent AI product releases have faced challenges due to the statistical nature of these models, which reflect the data they are trained on.
  • The concept of "sleeper agents" in AI models raises concerns about potential malicious code injection or bias introduction, emphasizing the importance of open-source training data and processes.
  • Profit-driven companies may optimize their AI models for revenue, shifting the focus toward maximizing user time on the platform rather than user benefit.

Future of AI

  • AI will continue to advance, with improvements in reasoning, training efficiency, and application to personal and corporate data while preserving privacy.
  • The doomsday scenario of AI eliminating humans is unlikely, as AI systems pursue specific goals and are not inherently destructive.
  • The more realistic concern is addiction to dopamine-driven entertainment systems, hindering personal growth and intellectual development.
  • Addressing confirmation bias is crucial so that people encounter diverse perspectives rather than converging on a single narrative.