Hard not to see the Regulator being held captive by the same kind of special interests that blight other industries, such as sugar, banking, and pharma...
Good opportunity to apply the two-but rule: “But regulatory capture could be avoided or ameliorated if...” #2buts 2buts.com
If you use the FDA as a positive example, you are in a fool's paradise. Maybe you should use the FAA.
The FDA completely failed to ensure that the mRNA vaccine companies ran proper trials to assess safety or even effectiveness. The mRNA randomized controlled trials (the only reliable method of assessing causality) only showed a reduction in symptomatic PCR positivity, not in more meaningful outcomes like actually reducing hospitalizations, overall deaths, or transmission: claims the FDA itself promoted without good evidence (https://www.bmj.com/content/371/bmj.m4037). Even after data suggested that young men face a higher risk of hospitalization after the mRNA covid vaccine than from covid itself, the FDA continued to promote those vaccines and their mandates.
The FDA routinely approves expensive drugs based on surrogate endpoints, without showing effectiveness on actually meaningful outcomes: e.g. https://www.science.org/content/blog-post/aducanumab-approval
As another example, they approved Nexletol based solely on LDL lowering, not on an actual reduction in cardiovascular deaths or heart attacks (https://www.acc.org/latest-in-cardiology/articles/2020/02/24/10/09/fda-approves-bempedoic-acid-for-treatment-of-adults-with-hefh-or-established-ascvd — read the last sentence), even though it is well known that many drugs that lower LDL actually increase cardiovascular deaths, e.g. clofibrate (https://www.nejm.org/doi/full/10.1056/NEJMoa0706628).
The FDA routinely hinders cheap but effective agents like NMN (https://twitter.com/davidasinclair/status/1603513997768138756)
and NAC, which is effective in many ways and has a proven safety track record going back decades.
In summary, the FDA is more about protecting the financial interests of the pharma companies with which it has conflicts of interest, and much less about protecting consumers.
I do see a need for an agency that ensures supplements/medicines are free of contamination and contain exactly what is stated on the label. But that could be done by market forces, such as an "Amazon/ConsumerReports Certified Integrity" label.
Surely the genie is out of the bottle. How do you regulate AI development in countries like China and Russia, for example?
My big question is: WHO is qualified to create the guardrails? How do they convene, and do they produce non-AI answers, or is it all AI-based?
I agree with all your points. My question is: how do we actually get there? While you were at Facebook and admittedly took advantage of Section 230 and loose regulations, I assume you would have fought tooth and nail against these types of regulations being put in place. Up-and-coming AI entrepreneurs and companies with large interests in this space will keep pushing the limits and fighting against any regulation.
Of course we need to spread awareness, as you're doing with this article, but what more can we do? What can those of us who are interested, but lack the same platform and influence, do?
Well written and timely.
It would help to use an example illustrating why unfettered AI can be dangerous and how regulation can not only protect against bad actors but also enable innovation to flourish - we need both.
Couldn’t have said it better myself, Chamath. This is so good.
This call to create a regulator, coming from the companies leading the space, is because they want it as a barrier to entry to reduce competition and build a competitive advantage.
I hope it wasn't intentional that you left out the "argument against" of captive control. If we can't guarantee that the new regulation or potential agency wouldn't become captive to the interests of the largest players in the space, then I could not support this. I would love to see some kind of blanket prioritization of addressing the regulatory capture issue. Let's work on prioritizing that nationwide risk, which is much greater than this one. imo
Chamath, How do you avoid being scammed by other people? How do millions of people do it? Do you even know?
Could AI used by me help detect a scam AI used by you?
This article is you at your most presumptuous. What government regulation is perfect? Have you ever seen one you didn't like? How did that happen?
Chamath, you aren't as smart as you believe. You've been lucky.
Thank you for sharing your thoughts on this. It needed to be said.
You have a myopic American view of this. A world AI council is the ONLY way to have an effective impact.
Wonderful Chamath
I believe it's not premature to discuss regulatory frameworks—these discussions should be as forward-looking as the technology itself. History has shown us that waiting until problems arise can lead to adverse consequences. Yes, regulations might slow down innovation in some areas, but they could also provide a clear, fair, and safe path for progress. The key is to achieve a balance where regulations are not stifling, but rather set guidelines to prevent misuse and promote ethical and responsible AI development. This is not about fear of progress; it's about being proactive and thoughtful about the implications of the technologies we create.
Thank you for taking the lead and speaking out. Perhaps industry experts can team up with a government agency to open a dialogue.
Great Piece