Viktorpavlov554 video 4 : Top 10 artificial intelligence predictions

 

  Hey guys, welcome to our channel.

This video is about the Top 10 artificial intelligence predictions.




1: GPT-4 will be released in the next couple of months—and yes, it will be a big deal.

Rumors have been flying recently about GPT-4, the next generation of OpenAI’s powerful generative language model.

Expect GPT-4 to be released early in the new year and to represent a dramatic step-change in performance relative to GPT-3 and GPT-3.5. As manic as the recent hype around ChatGPT has been, it will be a mere prelude to the public reaction when GPT-4 is released. Buckle up.


Most of today’s leading language models were trained on data corpora of about 300 billion tokens, including OpenAI’s GPT-3 (175 billion parameters in size), AI21 Labs’ Jurassic (178 billion parameters in size), and Microsoft/Nvidia’s Megatron-Turing (530 billion parameters in size).


2: We are going to start running out of data to train large language models.

It has become a cliché to say that data is the new oil. This analogy is fitting in one underappreciated way: both resources are finite and at risk of being exhausted. The area of AI for which this concern is most pressing is language models.

Research efforts like DeepMind’s Chinchilla work have highlighted that the most effective way to build more powerful large language models (LLMs) is not to make them larger but to train them on more data.

But how much more language data is there in the world? (More specifically, how much more language data is there that meets an acceptable quality threshold? Much of the text data on the internet is not useful to train an LLM on.)
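To make the data-over-size point concrete, here is a back-of-envelope sketch of the Chinchilla heuristic, which suggests roughly 20 training tokens per model parameter is compute-optimal. The 20:1 ratio is an approximation drawn from the Chinchilla paper, not an exact law, and the comparison against the ~300 billion tokens mentioned above is illustrative:

```python
# Rough sketch of the Chinchilla compute-optimal heuristic:
# approximately 20 training tokens per model parameter.
TOKENS_PER_PARAM = 20  # approximate ratio; not an exact law

def optimal_tokens(params: float) -> float:
    """Rough compute-optimal training-token count for a given model size."""
    return TOKENS_PER_PARAM * params

# Compare against the ~300B-token corpora that today's models actually used.
for name, params in [("GPT-3", 175e9), ("Jurassic", 178e9),
                     ("Megatron-Turing", 530e9)]:
    tokens = optimal_tokens(params)
    print(f"{name}: {params / 1e9:.0f}B params -> "
          f"~{tokens / 1e12:.2f}T tokens (vs ~0.3T actually used)")
```

By this rule of thumb, a 175-billion-parameter model “wants” on the order of 3.5 trillion training tokens, roughly ten times what today’s leading models were trained on, which is exactly why the supply of high-quality text becomes the binding constraint.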



3: For the first time, some members of the general public will begin using fully driverless cars as their day-to-day mode of transportation.

After years of premature hype and unfulfilled promises in the field of autonomous vehicles, something has happened recently that surprisingly few people seem to have noticed: truly driverless cars have arrived.

Today, as a member of the general public, you can download the Cruise app (it looks just like the Uber or Lyft app) and hail a driverless vehicle—with no one behind the wheel—to take you from Point A to Point B on the streets of San Francisco.



4: Midjourney will raise venture capital funding.

The three most prominent text-to-image AI platforms today are DALL-E from OpenAI, Stable Diffusion from Stability AI (and other contributors), and Midjourney.

OpenAI raised $1 billion from Microsoft in 2019 and is currently in talks to raise billions more. Stability AI raised $100 million a few months ago and is already seeking to raise more.

Midjourney, by contrast, has spurned all outside funding. The company’s usage and growth have been astonishing: as of this writing, it has nearly 6 million users and substantial revenues. Yet according to its website, Midjourney remains a “small self-funded” organization with only 11 full-time team members.


Yet faced with the demands of blistering growth, intensifying competition, and a massive market opportunity, we predict Midjourney founder David Holz will give in and raise a large funding round for Midjourney in 2023. Otherwise, the company risks being left behind in the generative AI gold rush that it helped usher in.


5: Search will change more in 2023 than it has since Google went mainstream in the early 2000s.

Search is the primary means by which we navigate and access digital information. It lies at the heart of the modern internet experience.

Today’s large language models can read and write with a level of sophistication that a few years ago would have seemed inconceivable. This will have profound implications for how we search.

In the wake of ChatGPT, one reconceptualization of search that has gotten a lot of attention is the idea of conversational search. Why enter a query and get back a long list of links (the current Google experience) if you could instead have a dynamic conversation with an AI agent in order to find what you are looking for?



Search has changed surprisingly little since Google’s ascendance during the dot-com era. Next year, thanks to large language models, this will begin to change dramatically.


6: Efforts to develop humanoid robots will attract considerable attention, funding and talent. Several new humanoid robot initiatives will launch.

The humanoid robot is perhaps the definitive symbol of Hollywood’s exaggerated, dramatized depiction of artificial intelligence (think Ex Machina or I, Robot).

Well, humanoid robots are fast becoming a reality.

Why build robots shaped like humans? For the simple reason that we have architected much of the physical world for humans. If we plan to use robots to automate complex activities in the world—in factories, shopping malls, offices, schools—the most effective approach is often for those robots to have the same form factor as the humans that would otherwise be completing those activities. This way, robots can be deployed in diverse settings with no need for the surrounding environment to be adapted.

As happened with autonomous vehicles circa 2016, waves of talent and capital will start pouring into the field next year as more people come to appreciate the scale of the market opportunity.


7: The concept of “LLMOps” will emerge as a trendy new version of MLOps.

When a major new technology platform emerges, an associated need—and opportunity—arises to build tools and infrastructure to enable this new platform. Venture capitalists like to think of these supporting tools as “picks and shovels” (for the upcoming gold rush).


We predict the term “LLMOps” will catch on as shorthand for this new breed of picks and shovels built around large language models. Examples of LLMOps offerings will include tools for foundation model fine-tuning, no-code LLM deployment, GPU access and optimization, prompt experimentation, prompt chaining, and data synthesis and augmentation.
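One of the patterns listed above, prompt chaining, can be sketched in a few lines. This is a hypothetical illustration, not any particular LLMOps product: `call_llm` is a stand-in for whatever hosted completion API a team uses, and a real tool would add retries, logging, caching, and prompt versioning on top:

```python
# Minimal sketch of prompt chaining, where each step's model output
# is fed into the next step's prompt template.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real hosted LLM API call."""
    return f"<model output for: {prompt!r}>"

def chain(steps: list[str], user_input: str) -> str:
    """Run a sequence of prompt templates, piping each output forward."""
    text = user_input
    for template in steps:
        text = call_llm(template.format(text=text))
    return text

pipeline = [
    "Summarize the following document in three bullet points:\n{text}",
    "Rewrite this summary for a non-technical audience:\n{text}",
]
result = chain(pipeline, "long source document goes here")
```

The value an LLMOps tool adds over this bare loop is precisely the operational scaffolding: versioning each template, recording every intermediate output, and measuring quality across prompt variants.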


8: The number of research projects that build on or cite AlphaFold will surge.

DeepMind’s AlphaFold platform, first announced in late 2020, solved one of life’s great mysteries: the protein folding problem. AlphaFold is able to accurately predict the three-dimensional shape of a protein based solely on its one-dimensional amino acid sequence, a landmark achievement that had eluded human researchers for decades. (We have previously argued that AlphaFold represents the single most important achievement in the history of artificial intelligence.)


In 2023, expect the volume of research built on top of AlphaFold to surge. Researchers will take this vast new trove of foundational biological knowledge and apply it to produce world-changing applications across disciplines, from new vaccines to new types of plastics.


9: DeepMind, Google Brain, and/or OpenAI will undertake efforts to build a foundation model for robotics.

The term “foundation model,” introduced last year by a team of Stanford researchers, refers to a massive AI model trained on broad swaths of data that, rather than being built for a specific task, can perform effectively on a wide range of different activities.


What would it mean to build a foundation model for robotics—in other words, a foundation model for the physical world? At a high level, such a model might be trained on troves of data from different sensor modalities (e.g., camera, radar, lidar) in order to develop a generalized understanding of physics and real-world objects: how different objects move, how they interact with one another, how heavy or fragile or soft or flexible they are, what happens when you touch or drop or throw them. This “real-world foundation model” could then be fine-tuned for particular hardware platforms and particular downstream activities.
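The pretrain-then-fine-tune pattern described above can be sketched schematically. No such robotics foundation model exists yet, so every name here is illustrative; the stand-in "learning" is just bookkeeping meant to show the shape of the workflow, not real training:

```python
# Schematic sketch of the "real-world foundation model" workflow:
# broad multi-sensor pretraining, then per-platform fine-tuning.

class RealWorldFoundationModel:
    """Hypothetical model; attributes stand in for learned weights."""

    def __init__(self):
        self.knowledge: dict[str, int] = {}

    def pretrain(self, sensor_batches: dict[str, range]) -> None:
        """Absorb broad data from many modalities (camera, radar, lidar...)."""
        for modality, samples in sensor_batches.items():
            self.knowledge[modality] = len(samples)  # stand-in for training

    def fine_tune(self, platform: str, task_data: range) -> dict:
        """Specialize the general model for one hardware platform and task."""
        return {
            "platform": platform,
            "base_knowledge": dict(self.knowledge),
            "task_examples": len(task_data),
        }

model = RealWorldFoundationModel()
model.pretrain({"camera": range(1000), "lidar": range(500)})
arm_policy = model.fine_tune("warehouse-arm", task_data=range(200))
```

The key design idea the sketch captures is the asymmetry: the expensive, general pretraining happens once, while each downstream robot platform needs only a comparatively small fine-tuning dataset.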


10: Many billions of dollars of new investment commitments will be announced to build chip manufacturing facilities in the United States as the U.S. makes contingency plans for Taiwan.

Artificial intelligence, like human intelligence, depends upon both software and hardware. Certain types of advanced semiconductors are essential to power modern AI. By far the most important and widespread of these are Nvidia’s GPUs; players like AMD, Intel and a handful of younger AI chip upstarts are also seeking to enter the market.


This process is already underway. Two weeks ago, TSMC announced it would invest $40 billion to build two new chip manufacturing plants in Arizona. (President Biden visited the Arizona site in person to hail the announcement.) Importantly, the new TSMC plants—slated to begin production by 2026—will be capable of producing 3 nanometer chips, the most advanced semiconductors in the world today.

Expect to see more such commitments in 2023 as the U.S. seeks to derisk the global supply base for critical AI hardware.




What do you think of our video?

Let me know in the comment section below.

Before you go please hit the like button and subscribe to my channel.

Thank you for watching.
