($) Llama 3.1 and Its Geopolitical Implications

Output for training, synthetic data use, and more

Kevin Xu
Jul 24, 2024


This is premium member-only content. Thank you for being a paid subscriber of Interconnected Premium! If you aren’t yet part of our premium edition, I hope you join us!


Yesterday, Meta released its much-anticipated Llama 3.1 series of open AI models. Among them is a 405-billion-parameter model that is by far the largest and most state-of-the-art (SOTA) frontier model in the open-source world.

There are (and will be) many, many, many analyses and benchmarks comparing Llama 3.1 405B with the set of closed-source SOTA models – GPT-4, Claude, Gemini, etc. I won’t add to that noise here. Instead, in this post, I want to highlight some notable improvements and changes in this model, shared in its model card and technical paper, that make it arguably the most consequential development in the geopolitical dynamics and global competition of AI yet.

Let’s dig in!

Allowing Output for Training
