AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees


AI and crypto isn’t just a buzz phrase

AI Eye has been out and about at Korean Blockchain Week and Token2049 in Singapore over the past fortnight, trying to find out how crypto project leaders plan to use AI.

Probably the best known is Maker founder Rune Christensen, who essentially plans to relaunch his decade-old project as a collection of sub-DAOs employing AI governance.

“People misunderstand what we mean with AI governance, right? We’re not talking about AI running a DAO,” he says, adding that the AI won’t be enforcing any rules. “The AI cannot do that because it’s unreliable.” Instead, the project is working on using AI for coordination and communication — as an “Atlas” for the entire project, as they’re calling it.

“Having that sort of central repository of data just makes it actually realistic to have hundreds of thousands of people from different backgrounds and different levels of understanding meaningfully collaborate and interact because they’ve got this shared language.”

Near founder Illia Polosukhin may be better known in AI circles, as his project began life as an AI startup before pivoting to blockchain. Polosukhin was one of the authors of the seminal 2017 Transformer paper (“Attention Is All You Need”) that laid the groundwork for the explosion of generative AI like ChatGPT over the past year.

Polosukhin has too many ideas about legitimate AI use cases in crypto to detail here, but one he’s very keen on is using blockchain to prove the provenance of content, so users can distinguish between genuine material and AI-generated bullshit. Such a system would combine cryptographic provenance with on-chain reputation.

Near founder Illia Polosukhin in conversation with AI Eye in Seoul. (Andrew Fenton)

“So cryptography becomes like an instrument to ensure consistency and traceability. And then you need reputation around this cryptography, which is on-chain accounts and record keeping to actually ensure that [X] posted this and [X] is working for Cointelegraph right now.”
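To make the idea concrete, here’s a minimal sketch of how content provenance along these lines could work — not anything Near has published, just an illustration under simple assumptions: the author signs a hash of the content, the signature and public key are logged in a registry (here a plain Python dict standing in for on-chain record keeping), and readers later check that the content really came from that key.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Author side: generate a keypair and sign the hash of the content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Original article text goes here"
digest = hashlib.sha256(content).digest()
signature = private_key.sign(digest)

# Stand-in for on-chain record keeping:
# content hash -> (hypothetical author account, public key, signature).
REGISTRY = {digest: ("hypothetical-author-account", public_key, signature)}


def verify_provenance(data: bytes) -> bool:
    """Reader side: recompute the hash and check it against the registry."""
    entry = REGISTRY.get(hashlib.sha256(data).digest())
    if entry is None:
        return False  # no provenance record exists for this content
    _author, pubkey, sig = entry
    try:
        pubkey.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False


print(verify_provenance(content))                        # True
print(verify_provenance(b"AI-generated impostor text"))  # False
```

The reputation half of the idea would then hang off the on-chain account that published the signature — the part the dict above glosses over.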

Sebastien Borget from The Sandbox says the platform has been using AI for content moderation over the past year. “In-game conversation in any language is actually being filtered, so there is no more toxicity,” he explains. The project is also examining its use for music and avatar generation, as well as for broader user-generated content in world-building.

Meanwhile, Framework Ventures founder Vance Spencer outlined four main use cases for AI, the most interesting by far being training AI models and then selling them as tokens on-chain. As luck would have it, Framework has invested in a game called AI Arena, in which players train AI models to compete in the game.

Keep an eye out for in-depth Magazine features outlining their thoughts in more detail.



AI is for communists?

Speaking of AI and crypto, are they pulling in opposite directions? Dynamo DAO’s Patrick Scott dug up PayPal founder Peter Thiel’s thoughts on AI and crypto in his foreword to the re-release of the 1997 non-fiction book The Sovereign Individual, which predicted cryptocurrency, among other things. In it, Thiel argues AI is a technology of control, while crypto is one of liberation.

“AI could theoretically make it possible to centrally control an entire economy. It is no coincidence that AI is the favorite technology of the Communist Party of China. Strong cryptography, at the other pole, holds out the prospect of a decentralized and individualized world. If AI is communist, crypto is libertarian.”

Roblox lets users build with AI

Roblox has unveiled a new feature called Assistant, which will let users build virtual assets and write code using generative AI. In the demo, users write something like “make a game set in ancient ruins” and “add some trees,” and the AI does the rest. It’s still being developed and will be released at the end of this year or early next year. The plan is for Assistant to one day generate sophisticated gameplay or make 3D models from scratch.

Roblox Assistant (Roblox)

Terrible workers benefit most from AI

The worst workers at your place of employment are likely to benefit the most from using AI tools, according to a new study by Boston Consulting Group. The output of below-average workers improved by 43% when using AI, while the output of above-average workers improved by just 17%.

Interestingly, workers who used AI for things beyond its current abilities performed 20% worse because the AI would present them with plausible but wrong responses.

Google Gemini gears up for release

Google’s GPT-4 competitor is nearing release, with The Information reporting that a small group of companies has been given early access to Gemini. For those who came in late, Google was seen leading the AI race right up until OpenAI dumped ChatGPT on the market in November last year (arguably before it was ready) and leaped ahead.

Google hopes Gemini can best GPT-4 by offering not just text generation but also image generation, enabling the creation of contextual images (rumors suggest it’s being trained on YouTube content, among other data). Future plans include features such as using it to control software with your voice or to analyze charts. Highlighting how important Gemini is, Google co-founder Sergey Brin is said to be playing an instrumental role in evaluating and training the models.


AI expert Brian Roemmele says he’s been testing a version of Gemini and finds it “equivalent to ChatGPT-4 but with a newly up-to-the-second knowledge base. This saves it from some hallucinations.”

Google CEO Sundar Pichai told Wired this week that he has no regrets about not launching the company’s chatbot earlier to beat ChatGPT, because the tech “needed to mature a bit more before we put it in our products.”

“It’s not fully clear to me that it might have worked out as well,” Pichai said. “The fact is, we could do more after people had seen how it works. It really won’t matter in the next five to 10 years.”

AI meets 15-minute cities

Researchers at Tsinghua University in China have built an AI system that plans out cities in line with current thinking about walkable “15-minute cities” that have lots of green space (please direct conspiracy theories about the topic to X).

The researchers found the AI was better at tedious computation and repetitive tasks and was able to complete in seconds what human planners required 50 to 100 minutes to work through. Overall, they determined it was able to improve on human designs by 50% when assessed on access to services, green spaces and traffic levels.

The headline figure is a bit misleading, though, as the finished plans only increased access to basic services by 12% and to parks by 5%. In a blind judging process, 100 urban planners preferred some of the AI designs by a clear margin but expressed no preference for other designs. The researchers envisage their AI working as an assistant doing the boring stuff while humans focus on more challenging and creative aspects.

Stephen Fry is cloned

Blackadder and QI star and much-loved British comedy institution Stephen Fry says he has become a victim of AI voice cloning.

Speaking at the CogX Festival in London on September 14, Fry played a clip from a historical documentary he apparently narrated — then revealed the voice wasn’t him at all. “I said not one word of that — it was a machine,” he said. “They used my reading of the seven volumes of the Harry Potter books, and from that dataset an AI of my voice was created, and it made that new narration.”

Training AI to rip off the work of actors and repurpose it elsewhere without payment is one of the key issues in the current actors’ and writers’ strikes in Hollywood. Fry said the incident was just the tip of the iceberg, and that AI will “advance at a faster rate than any technology we have ever seen. One thing we can all agree on: it’s a fucking weird time to be alive.”

Former QI host Stephen Fry (BBC)

How not to cheat using ChatGPT

The sort of academics drawn to cheating with ChatGPT also appear to be the sort who make dumb mistakes that give the game away. A paper published in the journal Physica Scripta was retracted after computer scientist Guillaume Cabanac noticed the ChatGPT button label “Regenerate response” in the text, indicating it had been copied directly from the chatbot.

Cabanac has helped uncover hundreds of AI-generated academic manuscripts since 2015, including a paper in the August edition of Resources Policy, which contained the tell-tale line: “Please note that as an AI language model, I am unable to …”

Physica Scripta gets called out over obviously AI-generated content.

All Killer No Filler AI News

— Meta is also working on a new model to compete with GPT-4 that it aims to launch in 2024, according to The Wall Street Journal. It is intended to be many times more powerful than its existing Llama 2.

— Microsoft has open-sourced a novel protein-generating AI called EvoDiff. It works like Stable Diffusion and DALL-E 2, but instead of generating images, it designs proteins that can be used for specific medical purposes. This is expected to lead to new classes of drugs and therapies.

— Defense contractor Palantir, along with Cohere, IBM, Nvidia, Salesforce, Scale AI and Stability AI, has signed up to the White House’s somewhat vague plans for responsible AI development. The administration is also developing an executive order on AI and plans to introduce bipartisan legislation.

— Sixty U.S. senators recently attended a private briefing on the risks of AI given by 20 Silicon Valley CEOs and wonks, including Sam Altman, Mark Zuckerberg and Bill Gates. Elon Musk told reporters afterward that the meeting “may go down in history as very important to the future of civilization.”

— ChatGPT traffic has fallen for three months in a row: by roughly 10% in both June and July, and by a further 3.2% in August. The amount of time users spend on the site also fell, from an average of 8.7 minutes in March to seven minutes last month.

— Finnish prisoners are being paid $1.67 to help train AI models for a startup called Metroc. The AI is learning how to determine when construction projects are hiring. 

— The U.S. is way out in front in the AI race, with 4,643 startups and $249 billion of investment since 2013. That’s 1.9 times as many startups as China and Europe combined.


Video of the week

Writer and storyteller Jon Finger tried out the HeyGen video app, which is able to not only translate his words but also clone his voice AND sync up his lip movements to the translated text.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.