Professor - Excellent sleuthing. "A fight for free, open-source AI is a fight for the future of humanity."
This demon spawns from the depths of hell - Oxford - funded in part by WEF & Musk. They share a bloody office together with CEA, and Toner is on staff. The name of this org? Future of Humanity Institute, also funded by The Future Of Life Institute - oh the irony of it all Mr. Mulder.
https://www.fhi.ox.ac.uk/ai-governance/govai-2019-annual-report/#team
https://en.wikipedia.org/wiki/Future_of_Humanity_Institute
https://en.wikipedia.org/wiki/Centre_for_Effective_Altruism
https://en.wikipedia.org/wiki/Future_of_Life_Institute
My paradoxical positioning on AI safety, a tilt between being vehemently opposed and wanting to accelerate the hell out of it, is complicated, but simple.
Years ago, in my former "professional" life, one of my personal goals, which I invested a lot of company resources in, was AI safety and disruption. Now I find myself in the polar opposite position of wanting to accelerate it, so everyone has equal access and almost equal capabilities. This moment will inevitably birth some problems down the road, but it is overall mostly positive.
ChatGPT and Claude.ai have given me 2 free days per week, that is how hard they have upped my productivity. In one year, AI has given me 104 extra days (2 days x 52 weeks). Of course, predictably, with all that new free time I get right back on the computer and start tinkering with my Emacs init file. But still.
On moral grounds and by personal choice I refuse to use AI in both of my Substacks (it literally feels like a grift, like I am stealing money), but I do use it for the other types of work I do, and it is astounding how much it can enhance our productivity and cognitive output.
I call LLMs "Cognitive enhancers".
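To make the "cognitive enhancer" point concrete, here is a minimal sketch of the kind of rote task I offload, assuming the official openai Python client and an API key in the environment; the tighten helper, the prompt, and the model name are my own placeholders, not a recommendation:

```python
# Minimal sketch of an LLM as a "cognitive enhancer": offload a rote
# copy-editing pass while you keep the judgment.
# Assumes the official `openai` package (pip install openai) and an
# OPENAI_API_KEY in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tighten(draft: str) -> str:
    """Ask the model to copy-edit a draft without changing its meaning."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "You are a copy editor. Fix grammar and tighten "
                        "prose. Do not change the meaning or add claims."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tighten("Under moral grounds i refuses to use AI in my stack."))
```

The division of labor is the point: the model does the grunt work, you keep the judgment.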
Thank you for shedding light on these developments.
Eye opening indeed. My head spins, lol.
“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.” - C. S. Lewis
Just a thought, but what if the Board were put in place to fire Altman like this and burn it down so Microsoft got it cheap?
Microsoft had no say, sway, or influence over the board choices; that was other board members, I think.
Yesterday late at night, new information came to light that the current board directors (the people cited here) tried to merge OpenAI with Anthropic, which is the super-safety and religiously Effective Altruist former OpenAI guys. This is, mostly, an ego trip and an attempted EA coup, and if they burnt down the leading AI lab in the world, "it is for the better, safety bla bla bla".
I have said multiple times in the last 8 years that Rationalists and EA (they are basically the same to me) are a societal cancer.
Thanks for this. Both articles were very insightful and eye opening.
But I still don't understand what you think the WEF's objective and endgame is in standing behind the "abandon safety and full dev speed ahead" side. How does this enable their full-spectrum control and anti-China agenda?
You can only compete against other nations with equally powerful models.
China WILL NOT stop developing and deploying models, and unlike what the copium abusers claim, they are not much behind the US. Now that the EA morons killed OpenAI, they will catch up.
So being pro-acceleration enables them to oppose China on some level, and to develop the necessary tools to attempt control.
Looking forward to the index. Admittedly, I need to learn a lot more about AI. As of now it just seems like machine learning+ to scam stock prices higher and be used as a control mechanism. I definitely can see the benefit for midwit manipulation. "The magic box said XYZ is correct, it's smarter and completely objective, therefore, taking this jab is mandatory". An unquestionable Oracle that conveniently benefits the elites' goals. This, to me, explains the mad dash to control it. Money and control.
Again, I'm in way over my head here and I will definitely read more once I have a path for your past articles. These are just my skeptical thoughts. Thx again for your stacks! 🙏🏻
As I wrote in reply to a few comments, calling Large Language Models "Artificial Intelligence" is a stretch and truly a selling point these companies use to get investments, BUT the models (ChatGPT-4) are already "super" intelligent; they are more intelligent than most of the planet.
The best way to learn about LLMs is to use them, and then you will understand my positioning beyond my former academic past. Just use them; LLMs will be, in a sense, the OSs of the future.
In fact, if you want to have a massive trip, download ChatGPT on your phone and use the conversation feature (it is now free to all users) to have any type of conversation with the model. My index is mostly centered around health, not AI =P you may benefit health-wise, but not much LLM/AI-wise.
I would rather not self-dox my academic work, but let me just say I have personal reasons to state that large models are the future. We may hit a cap on their capacity with GPT-5, and breakthroughs will be necessary to push the field further, but as of right now, GPT-4 is incredibly capable.
Self-driving cars, as with anything Elon Musk (another AI safety moron), are a grift. Everything he touches is tainted. I have an instinctive distrust of the man.
I have written about the exact problems you are talking about, especially how compute-inefficient current LLMs are (I refuse to describe language models as artificial intelligence). We are basically brute-forcing results by throwing untold amounts of computational power into predictive models (that is what LLMs are: predictive, pattern-matching machines).
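To see what I mean by "predictive, pattern-matching machine", here is a toy sketch of my own: a bigram model that generates text purely by replaying counts of which word followed which. Real LLMs replace the count table with billions of learned parameters, but the objective, predict the next token, is the same:

```python
# Toy bigram model: "generation" is nothing but sampling from observed
# next-word statistics, i.e. pattern matching on the training data.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(candidates), weights=candidates.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```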
"Garbage in, garbage out" is an antiquated argument that holds no ground anymore, since a lot of data science goes into the pipeline: a large share of the cost of training an LLM comes from exactly the amount of "cleaning", labelling, and structuring needed on the large data sets before ANY training of the neural network is done.
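As a hypothetical, minimal sketch of where that cleaning money goes before a single gradient step (real pipelines add language identification, quality classifiers, PII scrubbing, and much more):

```python
# Smallest possible pre-training cleaning pass: normalize whitespace,
# drop fragments and leftover markup, remove exact duplicates.
import hashlib
import re

def clean(documents: list[str]) -> list[str]:
    seen: set[str] = set()
    kept = []
    for doc in documents:
        text = re.sub(r"\s+", " ", doc).strip()   # normalize whitespace
        if len(text) < 20:                        # drop tiny fragments
            continue
        if re.search(r"<[^>]+>", text):           # drop leftover HTML markup
            continue
        digest = hashlib.sha1(text.lower().encode()).hexdigest()
        if digest in seen:                        # exact-duplicate removal
            continue
        seen.add(digest)
        kept.append(text)
    return kept

raw = ["The   quick brown fox jumps over the lazy dog.",
       "the quick brown fox jumps over the lazy dog.",
       "<div>buy now</div>", "ok"]
print(clean(raw))  # one survivor: duplicates, markup, and fragments are gone
```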
Hallucinations happen with or without good data; they are a "problem" inherent to current models, but they can be mitigated by various methods, with new ones coming up every day.
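One of the simpler mitigation methods, sketched here with a hypothetical ask_model function standing in for any LLM call, is self-consistency: sample the same question several times and only trust a stable majority, since a hallucination is less likely to repeat identically than a grounded answer:

```python
# Self-consistency voting: ask N times, keep the majority answer,
# bail out when no stable majority emerges. `ask_model` is a stand-in
# for whatever LLM call you use.
from collections import Counter
from typing import Callable

def self_consistent_answer(ask_model: Callable[[str], str],
                           question: str, samples: int = 5) -> str:
    answers = [ask_model(question).strip().lower() for _ in range(samples)]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes <= samples // 2:
        # No stable majority: treat the answer as unreliable.
        return "unsure"
    return answer
```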
Current models are much more powerful than most people realize; I would expect that only small groups of researchers inside OpenAI are aware of just how powerful they are.
On the energy argument and brute forcing: the problem shrinks the moment better algorithms, models, training methods, and processing units arrive (GPUs are power hogs like nothing else, and tensor cores also suck up a lot of power; even if I were gifted an H200, my house couldn't handle it, my house can barely handle a 4080 lmao).
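The back-of-envelope math behind that joke, using approximate TDP figures and a standard 15 A / 120 V US wall circuit as my assumptions:

```python
# Rough power arithmetic. TDP figures are approximate board/module specs;
# circuit capacity assumes one standard US 15 A / 120 V household circuit.
WALL_CIRCUIT_W = 15 * 120          # ~1800 W available on one circuit

gpus_w = {
    "RTX 4080 (desktop)": 320,     # approx. total graphics power
    "H200 (single module)": 700,   # approx. SXM module TDP
    "8x H200 server node": 8 * 700 + 3000,  # GPUs plus rough CPU/fan/PSU overhead
}

for name, watts in gpus_w.items():
    share = watts / WALL_CIRCUIT_W
    print(f"{name}: ~{watts} W, {share:.0%} of one wall circuit")

# RTX 4080 (desktop): ~320 W, 18% of one wall circuit
# H200 (single module): ~700 W, 39% of one wall circuit
# 8x H200 server node: ~8600 W, 478% of one wall circuit
```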