16 Comments

Thank you for sharing, Chamath. Good read. I enjoyed most of it, but felt it was biased in favor of Grok without sufficient backing.

For example:

“Having access to this data in multiple formats such as images and audio can also help xAI’s model achieve a deeper and more nuanced understanding of the world.”

Don’t OpenAI, Bard, and others also have access to image and audio data?

Also, you say Grok has a distribution advantage, but how is that any better than the distribution Google has through all of its products, or Meta, with billions of users talking across its platforms?

I see Grok as “different” and valuable, but the logic you provided hasn’t convinced me it has a true advantage over the others.


Thanks for this! The race to AGI is getting wild with data, real-time smarts, and Grok's unique edge. Eager to see where this tech rollercoaster takes us next!


A great complement to this article is this video from Andrej Karpathy, a leading engineer at OpenAI:

https://youtu.be/zjkBMFhNj_g?si=m9bchYRu6Z-M_zRv


Well written and explained.

We are currently contemplating sending our mental health AI to med school. Any thoughts?

We built an MS mental health GPT.


Great and helpful summary! Thanks for writing it and sharing. I need to try Grok.


The first half gave me a good understanding of LLMs; the second half was him shilling Grok to me, which might have worked.

Thank you, it was a good read.


Great article, simplifying a very complex topic. (Almost as good as my husband’s Blockchain 101)

For the non-grokers, here is my take:

My myriad business peers, friends, and I post on X. (Caveat: I have not posted in recent years.) We all have nuanced opinions on subjects. Generally, we are not authors and have minimal or no footprint on the internet outside of social media.

As such, X does provide the data for deeper learning.

Now the question is: which social platform could provide the best data? X? (Run for the hills if someone is using TikTok to train their beast.)

So now, there is increased competition to grow a social platform that fosters meaningful conversation and debate. 😎


Wonderful. Shades of VLDB and ACID. Waiting for the bias in the training to become evident, as it will, particularly as it is used in healthcare.


Really well written! Enjoyed reading it :)


A good Grok puff piece disguised as an LLM explainer. Somehow Chamath and co., with all their “intellectual honesty,” can find nothing wrong to say about Elon on their pods. You are no different from the media that you constantly criticise.


Very clear breakdown! Thanks for putting this together!!


Thanks, Chamath, for sharing.


This is super well explained!

I'm enjoying Stanford's course on NLP with Deep Learning, which is a useful introduction.


Thank you for providing your insights, Chamath. I found your input to be informative and engaging. While I appreciated much of the content, I did perceive a bias towards Grok that, in my opinion, lacked adequate support.


Enjoyed your visuals a great deal.

Using a graph analogy to describe how related words are mapped closely together in vector space is helpful for visualizing how word embeddings work.

This analogy simplifies the concept of high-dimensional vector spaces without losing meaning.

It clearly conveys that semantic similarity between words is reflected in their proximity within this space.
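
To make the proximity idea concrete, here is a minimal sketch using made-up 3-dimensional vectors; the words and numbers are purely illustrative, and real embeddings such as word2vec or GloVe use hundreds of dimensions:

```python
# Toy illustration of "semantic similarity = proximity in vector space".
# The 3-dimensional vectors below are invented for illustration only.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.68, 0.12]),
    "apple": np.array([0.05, 0.10, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words sit close together (high similarity)...
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.999
# ...while unrelated words sit farther apart (lower similarity).
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.21
```

Cosine similarity is the usual proximity measure here because it compares the direction of the vectors rather than their length.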


And it will continue to self-confirm the hypotheses in the vectors: the vectors work, the percentages work, it all works. It is still layering Bayes' rule; it is not invariance. Invariance is the only solution to AI, LLMs, or otherwise. And there are 'otherwises'.
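
For readers unfamiliar with the reference: "layering Bayes' rule" presumably means sequential updating, where each posterior becomes the prior for the next piece of evidence, which is how a system's beliefs can self-confirm. A sketch of that layering:

```latex
% One Bayesian update of hypothesis H on evidence E_1:
P(H \mid E_1) = \frac{P(E_1 \mid H)\, P(H)}{P(E_1)}
% "Layered": the posterior becomes the prior for the next evidence E_2,
% so each update builds on, and can entrench, the previous one:
P(H \mid E_1, E_2) = \frac{P(E_2 \mid H, E_1)\, P(H \mid E_1)}{P(E_2 \mid E_1)}
```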
