You missed an important part of OpenAI history. Two of the lead AI developers left OpenAI to start Anthropic, which has billions invested from Google, Amazon, Salesforce, Zoom, etc. Now valued at almost $30B, it is the most important competitor to OpenAI. The founders left to create a safe AI alternative when OpenAI took investment dollars from Microsoft. I’m curious whether the safety concerns that led to the founding of Anthropic are similar to those cited by the board.
The King of SPACs speaks, albeit without any additional insight.
Nature created the 'natural cell'. AI will create a 'cancer cell' of information. We MUST stop this kind of thinking.
Well articulated, Chamath! You have done a lot of work setting up and funding companies. How would you have struck that balance between the non-profit mission and the for-profit side, knowing they need a lot of capital to make it work? Just curious.
What if circumstances or insights lead to the realization that the initially set goal needs to be changed during the journey, despite the effort already invested in achieving it?
I'm sympathetic to the statement in the post: "There is a key learning here. Whether you are a for-profit or non-profit entity, there are tried and true corporate structures to help you achieve your stated goal."
However, it's not clear to me that the existing methods of governing for-profit and non-profit startups are really the best ways of pursuing a goal, especially one as high-stakes as creating AGI.
Interesting read.
Nothing good comes easy.
Hope they work out the kinks in the org structure so we can focus on what truly matters: AGI.
Thanks, I'm looking forward to hearing more about it on the next pod.
https://open.substack.com/pub/kbssidhu/p/sam-altman-not-history-yet-but-his?r=59hi9&utm_campaign=post&utm_medium=web
My take... there's already talk of him being back.
"Conclusion" did it for me highlighting the need to keep going and use existing structured avoiding use of unnecessary conflicting structures to accomodate social and economical goals.
Was the non-profit structure an optics move, a lean toward effective altruism? What was the OpenAI non-profit's mission? It doesn't really seem to have a real “mission,” unless it's a vehicle to shelter the “ultimate beneficiary,” a for-profit entity that did a 10-figure investment deal with Microsoft. What was the purpose of the cap, and does it really mean anything? Does a third party actually control the cap? If so, the hybrid model seems flawed. The hallmark of the non-profit is the mission. If the mission was birthing AI while also limiting profits, why? I'd love to hear more on that.
Interesting. Without having an opportunity to read and understand the LLC agreement in detail, it would be difficult to make a judgement about the “structure” as illustrated.
Thanks for the history recap. Your conclusion reads as if you’re against the idea of a capped-profit hybrid structure. Even at 100x?
Will read soon! Looks interesting
Thanks for sharing, curious to hear more about Reid Hoffman's exit due to "an investment conflict".
Why can’t they use AGI to predict human emotions and avoid these kinds of situations? Is that possible in the future?