it’s an “open field”


If you closely follow the progress of OpenAI, the company run by Sam Altman whose neural networks can now write original text and create original images with remarkable ease and speed, you can just skip this piece.

On the other hand, if you’re only vaguely interested in the company’s progress and the momentum that other so-called generative AI companies are suddenly gaining, and you want to better understand why, you might benefit from this interview with James Currier, a five-time founder and now a venture capitalist who co-founded the firm NFX five years ago with several of his founder friends.

Currier falls into the camp of people who follow this progress closely — so much so that NFX has made several related investments in “generative tech,” as he calls it, and it’s getting more of the team’s attention every month. In fact, Currier doesn’t think the fuss over this new wrinkle on AI is so much hype as a recognition that the broader startup world is suddenly facing a very big opportunity for the first time in a long time. “Every 14 years, we get one of these Cambrian explosions. We had one around the internet in ’94. We had one around mobile phones in 2008. Now we’re having another one in 2022,” Currier says.

In retrospect, this editor wishes she’d asked better questions, but I’m learning here, too. Excerpts from our chat follow, edited for length and clarity. You can listen to our longer conversation here.

TC: There is a lot of confusion around generative AI, including exactly how new it is, or whether it has just become the latest buzzword.

JC: I think what happened in the AI world in general is that we had a sense we could have a specific kind of AI that would help us determine the truth of something. For example, is this a broken piece on the manufacturing line? Is this an appropriate meeting? It’s where you identify something using artificial intelligence the same way a human would identify it. That’s pretty much what AI has been for the past 10 to 15 years.

There were other groups of algorithms in AI that were more like these diffusion algorithms, which were meant to look at huge corpora of content and then create something new from them, saying, “Here are 10,000 examples. Can we create a 10,001st example that’s similar?”

Those were pretty fragile, pretty brittle, until about a year and a half ago. [Now] the algorithms have gotten better. But more importantly, the corpora of content we were looking at got bigger, because we have more processing power. So what’s happened is that these algorithms ride Moore’s Law – [with vastly improved] storage, bandwidth, speed of computation – and suddenly they’re able to produce something that looks very much like what a human would produce. That means the face value of the text it writes, and the face value of the drawing it draws, looks very similar to what a human would do. That has all happened in the last two years. So it’s not a new idea, but it has only recently gotten to this point. That’s why everyone looks at this and says, “Wow, that’s magic.”

So it was computing power that suddenly changed the game, not some previously missing piece of technical infrastructure?

It didn’t change suddenly; it changed gradually, until the quality of what it generated got to where it was meaningful to us. So the answer in general is no: the algorithms were very similar. With these diffusion algorithms, they got somewhat better. But mostly it’s about processing power. Then, about two years ago, [the powerful language model] GPT came out, and that was a kind of local computation; then GPT-3 came out, where [the AI company OpenAI] would do [the calculation] for you in the cloud. Because the data models were so much larger, they needed to do it on their own servers. You just can’t do it [on your own]. And at that point, things really jumped forward.
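For readers who want to picture what “doing the calculation for you in the cloud” means in practice, here is a minimal sketch of an application calling a hosted text-generation model, using OpenAI’s Python client roughly as it existed around the time of this interview; the model name, prompt, and key handling are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch: the prompt is sent to the provider's servers, where the
# large model runs; only the generated text comes back to the application.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: a key issued by the provider

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model name
    prompt="Describe a ruined castle in one vivid sentence.",
    max_tokens=60,
    temperature=0.8,            # higher values produce more varied output
)

print(response.choices[0].text.strip())
```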

We know because we invested in a company doing generative AI-based games, including “AI Dungeon,” and I believe that at one point the vast majority of all GPT-3 computation was coming through “AI Dungeon.”

Does AI Dungeon require a smaller team than another game maker might?

That’s definitely one of the big advantages. They don’t have to spend all that money storing all that data, and with a small group of people they can produce dozens of gaming experiences that all take advantage of it. [In fact] the idea is that you’ll add generative AI to older games, so that your non-player characters can say something more interesting than they do today, though you’ll also get completely new gaming experiences born from AI, as opposed to AI simply being added to existing games.
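To make the non-player-character idea concrete, here is a rough, entirely illustrative sketch (not how “AI Dungeon” or any particular game actually works) of an existing game wrapping its dialogue step around a text-generation call; the generate function is a placeholder for whichever model API a studio plugs in.

```python
# Illustrative sketch: NPC dialogue generated from game state rather than
# picked from a fixed list of canned lines. `generate` is a placeholder for
# any text-generation call, not a real SDK function.
from dataclasses import dataclass
from typing import Callable

@dataclass
class NPC:
    name: str
    role: str
    mood: str

def build_prompt(npc: NPC, player_action: str) -> str:
    # Pack the relevant game state into a prompt the model can condition on.
    return (
        f"{npc.name} is a {npc.mood} {npc.role} in a fantasy village. "
        f"The player just {player_action}. "
        f"Write one short line of dialogue {npc.name} says in response."
    )

def npc_reply(npc: NPC, player_action: str, generate: Callable[[str], str]) -> str:
    # The model call is injected so the game isn't tied to one provider.
    return generate(build_prompt(npc, player_action)).strip()

# Usage, with any text-generation function supplied as `generate`:
# blacksmith = NPC("Hilda", "blacksmith", "grumpy")
# print(npc_reply(blacksmith, "asked her to repair a broken sword", generate=my_model_call))
```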

So will there be significant further changes in quality, or will this technology stabilize at some point?

No, it will always get incrementally better. It’s just that the increments will be smaller over time, because these models have already gotten really good.

But the other big change is that OpenAI wasn’t really open. They produced this amazing thing, but it wasn’t open and it was very expensive. So groups like Stability AI and others got together and said, “Let’s just make open-source versions of this.” And at that point, the cost came down 100x, just in the last two or three months.

And these aren’t offshoots of OpenAI?

Not all of this generative tech will be built on OpenAI’s GPT-3 model; that was just the first one. The open-source community has now replicated a lot of their work, and they’re probably six or eight months behind in quality. But it will get there. And because the open-source versions cost a third, a fifth, or a twentieth of what OpenAI does, you’re going to see a lot of price competition, and you’re going to see a proliferation of these models competing with OpenAI. You’ll probably end up with five, six, eight, or maybe a hundred of them.

Then on top of that, specialized AI models will be built. So you might have an AI model that’s really focused on generating hair, or AI models really focused on making visuals of dogs and dog fur, or a model that really specializes in writing sales emails. You’ll have a whole layer of these specialized AI models that then get built. Then on top of that, you’ll have all the generative tech, which will be: how do you get people to use the product? How do you get people to pay for it? How do you get people to log in? How do you get people to share it? How do you create network effects?

Who makes money here?

The application layer where people will go after distribution and network effects is where you will make money.

What about the big companies that will be able to incorporate this technology into their existing networks? Wouldn’t it be very difficult for a company without that advantage to come out of nowhere and make money?

I guess what you’re looking at is something like Twitch, where YouTube could have incorporated that into its model but didn’t. And Twitch created a new platform, a valuable new piece of culture, and value for investors and founders, even though it was difficult. So you’ll have great founders who will use this technology to give themselves an edge. That will create a wedge in the market, and while the big guys are busy doing other things, they’ll be able to build multibillion-dollar companies.

The New York Times ran a piece recently in which a few designers said that the generative AI applications they use in their respective fields are just tools in a broader toolbox. Are people being naive here? Are they at risk of being replaced by this technology? As you mentioned, the team working on “AI Dungeon” is smaller; that’s good for the company but bad for the developers who might otherwise have worked on the game.

I think with most technologies, there’s some sort of unease that people feel, [for example with] robots that replace a job in a car factory. When the internet came along, a lot of people doing direct mail felt threatened that companies would be able to sell directly and stop using their paper-based advertising services. But [once] they embraced digital marketing, or digital communication through email, even if they faced big hurdles in their careers, their productivity went up, their speed and efficiency went up. The same thing happened with credit cards online. We didn’t feel comfortable putting credit cards online until maybe 2002. But those who embraced [that wave from] 2000 to 2003 performed better.

I think the same thing is happening now. Writers, designers, and architects who think ahead and embrace these tools to give themselves a 2x, 3x, or 5x productivity boost will do incredibly well. I think the whole world will end up more productive over the next 10 years. It’s a huge opportunity for 90% of people to do more, make more, achieve more, and connect more.

Do you think it was a misstep on OpenAI’s part not to [open source] what it was building, given the environment it grew up in?

The leader ends up behaving differently from the followers. I don’t know; I’m not inside the company, so I can’t really say. What I do know is that there’s going to be a large ecosystem of AI models, and it’s not clear to me how any single AI model stays differentiated, because they all get close to the same quality and it just becomes a price game. It seems to me that the ones who win are Google Cloud and AWS, because we’re all going to be building things like crazy.

It could be that OpenAI ends up moving up or down [the stack]. Maybe they become like AWS themselves, or maybe they start making specialized AI models that they sell to certain verticals. I think everyone in this space will have the opportunity to do well if they navigate it properly; they’ll just have to be smart about it.

NFX, by the way, has a lot on its site about generative AI that’s worth reading. You can find it here.
