The 10 Trillion Parameter AI Model With 300 IQ
02 Nov 2024 (20 days ago)
Coming Up (0s)
- The concept of o1 being "magical" raises questions about its implications for founders and builders, with one argument suggesting it could be detrimental to builders because OpenAI might capture all the value with its powerful technology (3s).
- This scenario implies that OpenAI would capture the "light cone" of all present, past, and future value, potentially dominating the market (15s).
- A more optimistic alternative scenario suggests that with o1 becoming more deterministic and accurate, Founders can focus on higher-level tasks, such as building the best user experience, rather than spending time on getting prompts to work correctly and outputs to be accurate (30s).
- In this optimistic scenario, the winners would be those who build the best software, focusing on details that create a seamless user experience (38s).
What models get unlocked with the biggest venture round ever? (54s)
- OpenAI has raised the largest venture round ever, $6.6 billion, which will be used for compute, talent, and normal operating expenses, with a focus on scaling up their models (55s).
- The next model is expected to be an order of magnitude bigger, with 10 trillion parameters, two orders of magnitude out from the current state-of-the-art (1m39s).
- The current frontier models, such as Llama, Claude, and GPT-4, have roughly 500 billion parameters, and getting to 10 trillion parameters would be a significant leap (2m7s).
- This leap could be similar to the one from GPT-2 to GPT-3.5, which was a two-order-of-magnitude increase and led to a new flourishing era of AI companies (2m30s).
- If this scaling law continues, it could lead to a similar feeling of transition and wealth creation, as seen in 2023 when companies started building on top of GPT-3.5 (3m24s).
- The current state-of-the-art models already rival normal intelligence and can perform tasks with 90-98% accuracy, similar to a human knowledge worker with 120 IQ (3m50s).
- At 10 trillion parameters, the models could reach 200-300 IQ, unlocking new capabilities beyond what a normal human being can do, and potentially leading to breakthroughs in various fields (4m35s).
- There are examples of this happening in human history, such as the development of nuclear power, where theoretical modeling unlocked new capabilities (5m14s).
- The article in The Atlantic featuring Terence Tao, an Australian-American mathematician with an IQ north of 200, shows how ChatGPT is already unlocking new capabilities for him (4m52s).
Some discoveries take a long time to actually be felt by regular people (5m35s)
- The impact of AI is not yet evenly distributed, and many people do not feel its effects in their daily lives, but some discoveries take time to pan out and have a significant impact (5m36s).
- The Fourier transform, a mathematical representation discovered by Joseph Fourier in the early 1800s, is given as an example of a discovery that took 150 years to be fully utilized and have a significant impact on the world (6m0s).
- The Fourier transform originated in a seminal thesis on representing periodic functions as series, and it was later found to be extremely good at representing signals, which is crucial for representing everything in the analog world in a digital format (6m5s).
- The Fourier transform has numerous applications, including radio waves, telecommunication, image representation, encoding, and information theory, which have unlocked much of the modern world, including the internet and cell towers (7m7s).
- The average person could feel the impact of the Fourier transform 150 years after its discovery, and it is interesting to consider how long it will take for the average person to feel the impact of current AI developments (7m35s).
- The development of color TV in the 1950s is also attributed to the Fourier transform, and it is possible that current AI developments will have a similar impact in the future (7m51s).
- The start of the clock for current AI developments is unclear, but it is possible that we are already decades into this development and are just starting to hit the inflection moment (7m57s).
- The math underlying current AI developments, such as linear algebra, is over 100 years old, but the availability of GPUs has enabled the computation of complex models, which could potentially alter the face of what humans are capable of (8m27s).
- The impact of 10 trillion parameter models could be significant, potentially unlocking something about the nature of reality and our ability to model it, but it is unclear what the outcome will be (8m39s).
- The fact that current AI developments are in software, as opposed to physical devices, means that the technology can be adopted more quickly, and companies like Facebook and Google already have a significant percentage of the world using their software (9m13s).
- The development of consumer devices, such as Meta's Ray-Ban smart glasses, could be a significant moment for the widespread adoption of AI, enabling people to interact with AI in a more visual and conversational way (9m28s).
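The Fourier-transform point above — that any analog signal can be re-expressed as a sum of periodic components and thus represented digitally — can be illustrated with a minimal NumPy sketch. The signal and frequencies here are made up purely for illustration:

```python
import numpy as np

# Sample a simple "analog" signal: a 5 Hz tone plus a weaker 12 Hz tone,
# measured at 100 samples per second for one second.
sample_rate = 100
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)

# The FFT re-expresses the signal as amplitudes of periodic components --
# the core trick behind radio, telecom, and image/audio encoding.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

dominant = freqs[np.argmax(spectrum)]
print(dominant)  # the 5 Hz component dominates
```

Once a signal lives in the frequency domain like this, it can be compressed, transmitted, and reconstructed — which is why the same mathematics underpins cell towers, image codecs, and color TV.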
Distillation may be how most of us benefit (9m53s)
- There is a bifurcation in the expectations of what can be achieved with the capability of 10 trillion parameter models, with some people pushing the edge of understanding and others benefiting from distillation, which is taking a large model and making it smaller and cheaper to run (9m54s).
- There is evidence that large models like Meta's Llama 405B are mostly used to make smaller models better, and companies are using this process of distillation to make their models more efficient (10m23s).
- OpenAI has added distillation support to its own API, allowing users to distill large models like GPT-4o into smaller, cheaper models like GPT-4o mini (10m51s).
- This process of distillation is becoming more popular, with companies choosing to use smaller models over larger ones, and startups are building their products on top of these smaller models (11m50s).
- This shift in developer model choice is continuing, with Claude going from 5% to 25% developer market share in just six months, and Llama going from 0% to 8% (13m11s).
- The models startups adopt are a good predictor of what successful companies will use, and therefore of which products will be most successful (13m27s).
- OpenAI was losing market share to other models, but may be winning it back with its latest model, with 15% of the batch already using it despite it being only two weeks old (14m13s).
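The distillation idea discussed above — training a small, cheap student model to match a large teacher's softened output distribution — can be sketched in a few lines of NumPy. This is the generic textbook (Hinton-style soft-target) formulation, not OpenAI's actual API recipe; the logits and temperature below are illustrative assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the classic soft-target distillation objective."""
    p = softmax(teacher_logits, temperature)  # soft targets from big model
    q = softmax(student_logits, temperature)  # small model's predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return float(kl.mean() * temperature ** 2)

teacher = np.array([[4.0, 1.0, 0.5]])  # hypothetical large-model logits
student = np.array([[3.5, 1.2, 0.4]])  # hypothetical small-model logits
print(distillation_loss(teacher, student))  # small but nonzero
```

Minimizing this loss over many examples pushes the student toward the teacher's behavior at a fraction of the inference cost — which is why most users may feel frontier-model gains indirectly, through the smaller models distilled from them.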
o1 making previously impossible things possible (14m26s)
- A hackathon is being hosted to give YC companies early access to o1, with Sam Altman himself kicking off the event, and some teams have already built things that were not possible before with any other model (14m28s).
- One of the companies, Freestyle, is building a cloud platform fully built in TypeScript, and after just a couple of hours of work there was already a demo of a Replit-Agent-style experience working with their product (15m0s).
- The demo was able to reason over the developer documentation and code it was prompted with and built a web app that writes a to-do list (15m38s).
- There are two arguments about the impact of o1 on founders and builders: one is that it is bad for builders because OpenAI will capture all the value, and the other is that it will make it easier for founders to build things by making the process more deterministic and accurate (15m57s).
- The more optimistic scenario is that founders will spend less time on tooling and getting prompts to work correctly and more time on building better UI, customer experience, sales, and relationships (16m25s).
- This could make it a better time to start building now, as the knowledge learned around getting prompts accurate and working may become less relevant as these models get more powerful (17m0s).
- A conversation with Jake Heller from Casetext highlighted the challenges of getting their legal copilot to 100% accuracy, but with o1 that process could become much easier (17m9s).
- If o1 can guarantee 100% accuracy from day one, the barrier to entry to build these things will go way down, leading to more competition and a more traditional winner-takes-all software market (17m28s).
- An example of a company, Dry Merch, that went from 80% accuracy to almost 100% using o1 and unlocked a bunch of new capabilities is cited as evidence of o1's potential (17m42s).
- There may be an even more bullish version of this scenario, which is that there are use cases right now that people are not able to build because of the limitations of current models (18m7s).
- Large language models (LLMs) are not yet accurate enough to be used in mission-critical jobs where the stakes are dire, but as they improve, they will become more suitable for such applications (18m13s).
- A company in the YC portfolio, which was not profitable but was growing 50% a year, automated about 60% of its customer support tickets using LLMs and became cash-flow break-even while still growing 50% year on year (18m40s).
- This scenario is considered a dream for building enterprise value, as the company is compounding its growth with no additional capital coming in, and it has the potential to become a billion-dollar revenue company driving hundreds of millions of dollars in free cash flow (19m12s).
- The current overhang moment in 2024, where companies raised too much money at high multiples, is actually good news for them as they can go from not profitable to break-even to potentially very profitable (19m50s).
- Klarna got attention for pitching that it is replacing its internal systems of record for HR and sales with home-built, LLM-created apps, including getting rid of Workday (20m29s).
- If OpenAI is treated as the Google of the next 20 years, it is desirable to invest both in OpenAI and in the things it enables, similar to how Google enabled companies like Airbnb (20m52s).
- However, it is unlikely that OpenAI will create its own Airbnb-like company due to the inefficiency and difficulty of doing so, requiring too much domain expertise (21m7s).
- New companies, referred to as the "new Googles," are being built as vertical agents, with examples including TaxGPT, which started off providing tax advice and has since expanded into an enterprise business built on document upload (21m18s).
- TaxGPT began as a wrapper and leveraged its ability to perform research and analysis on existing case law and policy documents from the IRS or internationally, gaining traction among tens of thousands of accountants and accounting firms (21m29s).
- The company's business model involves offering a low-cost or free service initially, then securing high-value contracts, with an average contract value (ACV) of $10,000 to $100,000 per year, which can significantly reduce the workload of accountants (22m3s).
- The development of new AI models, such as o1, is a significant step forward, offering a major breakthrough for any programmer or engineer building AI applications, and it is unclear whether this will give OpenAI a temporary lead in market share or a more permanent advantage (22m44s).
- The AI landscape is rapidly evolving, with multiple models, including Llama, Claude, and Gemini, continually improving, and it is uncertain whether OpenAI's breakthrough will be replicable by others or whether it will maintain its lead (23m11s).
- OpenAI has consistently been at the forefront of innovation, making major breakthroughs, but has not always maintained its lead in the market, and it remains to be seen whether its latest development is a true breakthrough that is defensible and not replicable by others (23m36s).
o1 makes the GPU needs even bigger (23m47s)
- o1 makes GPU needs even bigger due to increased computation at inference time, which takes longer and changes the dynamics for companies building AI infrastructure (23m47s).
- There are two types of use cases: one where o1 can be used for difficult tasks and then distilled into GPT-4o or GPT-4o mini for relatively rote, repetitive use cases, and another where the full o1 experience is needed for specific, detailed tasks (24m10s).
- The cost of using o1 can be passed on to customers if the company sells enterprise software and its customers can tolerate higher latency, but this may not be feasible for consumer apps (24m52s).
- OpenAI's latest releases, such as the real-time voice API, are remarkable, with usage-based pricing of $9 per hour, which could be a threat to countries that rely heavily on call centers (25m16s).
- The $9 per hour pricing for the real-time voice API is comparable to the cost of a call center, making it a potentially powerful tool that could disrupt the industry (25m35s).
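The comparison above can be made concrete with back-of-the-envelope arithmetic. The $9/hour API rate comes from the discussion; the fully loaded call-center agent cost below is an assumed illustrative figure that varies widely by country:

```python
# Rough monthly cost comparison: real-time voice API vs. a human agent.
API_RATE_PER_HOUR = 9.0      # from the discussion
AGENT_RATE_PER_HOUR = 12.0   # assumed illustrative figure
HOURS_PER_MONTH = 160        # one full-time seat

api_monthly = API_RATE_PER_HOUR * HOURS_PER_MONTH
agent_monthly = AGENT_RATE_PER_HOUR * HOURS_PER_MONTH
print(api_monthly, agent_monthly)  # 1440.0 1920.0
```

Even at list price the API is already in the same range as a human seat, and API prices have historically fallen fast, which is what makes the disruption argument plausible.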
Voice apps are fast growing (25m44s)
- A company that specializes in AI voice for debt collection has shown phenomenal growth and success, with an incredibly strong track record. (25m54s)
- Many voice apps in S24 have experienced explosive growth, marking a clear trend for that batch. (26m3s)
- In previous batches, companies working on voice apps faced challenges such as high latency, confusion with interruptions, and other issues, but it seems that these problems have been overcome. (26m18s)
- A company called Happy Robot has developed a voice agent for coordinating phone calls in logistics, helping to streamline the process for truck drivers and others in the industry. (26m28s)
- The voice agent has gained significant usage and is solving menial problems over the phone, demonstrating the capabilities of AI in passing Turing tests. (26m51s)
Incumbents aren’t taking these innovations seriously (27m5s)
- Many engineering teams in incumbent industries do not take innovations in AI seriously, with most companies having no initiative on this front, and this lack of initiative may be generational, with managers and VPs being less aware of the rapid improvements in AI (27m5s).
- The rate of improvement in AI is not well understood by those not closely involved in the field, with many people being cynical about its potential impact, similar to how they viewed the cloud, which took around a decade to change enterprise software (27m49s).
- Even people within the tech industry are surprised by how quickly AI is moving, with its improvement being the fastest of any tech, surpassing processors and the cloud (28m41s).
- Technical founders are using the latest coding tools, such as coding assistants, with many using Cursor, a tool that is more advanced than GitHub Copilot; half of the Summer 2024 founders surveyed are using Cursor, compared to 12% using GitHub Copilot (29m39s).
- The use of these tools is allowing founders to build more polished demos and products, with some using the latest agentic coding tools, and this is evident in the hackathons, where founders are building impressive projects (30m5s).
- The advantage of using these tools is giving founders an edge, with Cursor being a notable example, having come out of nowhere to gain significant traction, reaching five times GitHub Copilot's usage within the batch (30m33s).
- The adoption of these tools by startup founders is a good indicator of their potential success, with companies like Stripe and AWS having successfully targeted YC batches as early customers (30m52s).
- The competition between these tools ultimately benefits developers, with the winners being those who build the best user experience and get the details right, which is why Cursor can compete with GitHub Copilot despite the latter's advantages (31m29s).
Ten trillion parameters (31m52s)
- A world with a 10 trillion parameter AI model could potentially lead to significant advancements in scientific and technological progress (31m53s).
- The current rate of progress is arguably limited by the number of smart people who can analyze the vast amount of existing information, including millions of scientific papers and data (32m16s).
- If AI models become smart enough to perform original thinking, deep analysis, and correct logic, they could potentially unlock a near-infinite amount of intelligence to process the vast amount of data and knowledge about the world (32m37s).
- This could lead to groundbreaking scientific discoveries, such as room temperature fusion, room temperature superconductors, time travel, and flying cars, which humans have not been able to invent yet (32m58s).
- With enough intelligence, it is possible that these inventions and discoveries could finally be made, leading to a potentially transformative future (33m9s).
- The concept being discussed is not just a simple tool or basic aid for the mind, but something far more complex and powerful, likened to a self-driving car or even a rocket to Mars, indicating its potential to greatly exceed human capabilities (33m15s).
- The possibilities of this technology are vast and potentially revolutionary, with the comparison to a rocket to Mars suggesting its ability to greatly accelerate progress and achieve groundbreaking advancements (33m24s).
- The discussion will be continued in a future video, with the current one coming to a close (33m27s).