AI has a monopoly on power and privacy. Blockchain fixes this.

Many Americans got their first glimpse behind the machine learning curtain when details of Amazon’s “Just Walk Out” technology went public. Instead of pure technology tallying up customers’ purchases and charging them when they left the store, the sales were manually checked by about 1,000 real people working in India.

But these workers were the human half of what most AI really is: a collaboration between reinforcement learning and human intelligence.

The human element tends to be ignored in discussions of AI safety, which is a little disturbing given how much of an impact AI will likely have on our job markets and, ultimately, our individual lives. This is where decentralization, along with the inherent trustlessness and security of blockchain technology, can play a major role.

The Center for AI Safety identifies four broad categories of AI risk. As a start, there’s malicious use, in which users might “intentionally harness powerful AIs to cause widespread harm” by engineering “new pandemics or [using them] for propaganda, censorship and surveillance, or [releasing AIs] to autonomously pursue harmful goals.”

A more subtle concern is the risk of an AI race, where corporations or nation states compete to quickly build more powerful systems and take unacceptable risks in the process. Unchecked cyberwarfare is one potential outcome; another is allowing systems to evolve on their own and slip out of human control. A more prosaic, but no less disruptive, outcome could be mass unemployment from unchecked competition.

Organizational risks with AI are similar to those in any other industry. AI could cause serious industrial accidents, or powerful programs could be stolen or copied by malicious actors. Finally, there’s the risk that the AIs themselves could go rogue, “optimizing flawed objectives, drifting from their original goals, becoming power-seeking, resisting shutdown or engaging in deception.”


Regulation and good governance can contain many of these risks. Malicious use is addressed by restricting queries and access to various features, and the court system could be used to hold developers accountable. Risks of rogue AI and organizational issues can be mitigated by common sense and by fostering a safety-conscious approach to using AI.

But these approaches don’t address some of the second-order effects of AI: namely, centralization and the perverse incentives carried over from legacy Web2 companies. For too long, we’ve traded our private information for access to tools. You can opt out, but for most users it’s a pain.

AI is no different from any other algorithm in that what you get out of it is the direct result of what you put in, and massive resources are already devoted to cleaning up and preparing data for use in AI. A good example is OpenAI’s ChatGPT, which is trained on hundreds of billions of lines of text taken from books, blogs and communities like Reddit and Wikipedia, but which also relies on people and on smaller, more customized databases to fine-tune the output.


This brings up a number of issues. Mark Cuban recently pointed out that, to become commercially useful beyond coding and copywriting, AI will eventually need to be trained on data that companies and individuals might not want to share. And as more jobs are impacted by AI, particularly as AI agents make custom AI applications accessible, the labor market as we know it could eventually implode.


Creating a blockchain layer in a decentralized AI network could mitigate these problems.

If we use decentralized identities, validation staking, consensus and rollup technologies (both optimistic and zero-knowledge), we can build AI that tracks the provenance of data, maintains privacy and lets individuals and enterprises charge for access to their specialized data. This could shift the balance away from large, opaque, centralized institutions and provide individuals and enterprises with an entirely new economic system.
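To make that concrete, here is a minimal sketch of what a dataset listing on such a network might look like. Everything in it is hypothetical rather than any existing protocol’s schema: the DatasetListing record, the did:example identifiers and the fee and stake fields are illustrative. The key idea is that only a content hash and ownership metadata get published, while the raw data stays with its owner.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class DatasetListing:
    """Hypothetical on-chain record tying a dataset to its owner."""
    owner_did: str        # decentralized identifier (DID) of the data owner
    content_hash: str     # SHA-256 digest of the off-chain dataset
    price_per_query: int  # access fee, in the network's smallest token unit
    validator_stake: int  # tokens staked by validators vouching for the listing

def make_listing(owner_did: str, data: bytes, price: int) -> DatasetListing:
    # Only the hash is published; the raw data never leaves its owner.
    return DatasetListing(
        owner_did=owner_did,
        content_hash=hashlib.sha256(data).hexdigest(),
        price_per_query=price,
        validator_stake=0,
    )

listing = make_listing("did:example:alice", b"proprietary sales records", price=100)
print(listing.content_hash)  # the only view of the data the network gets
```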

On the technological front, you need a way to confirm the integrity of the data, its ownership and its legitimacy (model auditing).
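As a sketch of the first two checks, suppose the owner publishes a hash of the dataset together with a signature over that hash; anyone can then confirm that the data is intact and was registered by the claimed owner. This example uses the stock Python cryptography library, and the names are illustrative; a real network would anchor the digest and public key on-chain.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The data owner hashes the dataset and signs the digest.
owner_key = Ed25519PrivateKey.generate()
data = b"specialized training dataset"
digest = hashlib.sha256(data).hexdigest().encode()
signature = owner_key.sign(digest)       # proof of ownership
public_key = owner_key.public_key()      # published alongside the listing

def verify(candidate: bytes, digest: bytes, signature: bytes) -> bool:
    """Check integrity (hash matches) and ownership (signature is valid)."""
    if hashlib.sha256(candidate).hexdigest().encode() != digest:
        return False                     # the data was altered
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False                     # not signed by the claimed owner

print(verify(data, digest, signature))              # True
print(verify(b"tampered data", digest, signature))  # False
```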

Then you would need a method of provenance (to borrow a phrase from the art world): the ability to see any piece of data’s audit trail in order to properly compensate whoever’s data is being used.
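One way to picture such an audit trail, purely as an illustration, is an append-only log in which each entry commits to the hash of the previous one, so usage history can be checked but not quietly rewritten. The ProvenanceLog class and its fee field below are hypothetical stand-ins for whatever a real network would record:

```python
import hashlib
import json

class ProvenanceLog:
    """Append-only, hash-chained log of who used which dataset."""

    def __init__(self):
        self.entries = []

    def record_usage(self, dataset_hash: str, consumer_did: str, fee: int) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "dataset": dataset_hash,   # which data was used
            "consumer": consumer_did,  # who used it
            "fee_owed": fee,           # basis for compensating the data owner
            "prev": prev,              # link to the previous entry
        }
        entry = dict(body)
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or digest != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

log = ProvenanceLog()
log.record_usage("ab12cd34", "did:example:model-trainer", fee=5)
log.record_usage("ab12cd34", "did:example:another-trainer", fee=5)
print(log.verify_chain())  # True
```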

Privacy is also important: A user must be able to secure their data on their own devices and control access to it, including the ability to revoke that access. Doing so involves cryptography and a security-protection certification system.
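One standard pattern for this kind of revocable access is envelope encryption: encrypt the data once under a data key, hand copies of that key to authorized parties, and rotate the key when someone is revoked. The sketch below is an illustration under stated assumptions, using the Python cryptography library’s Fernet primitive and a hypothetical RevocableDataVault; a production system would wrap the data key per recipient rather than share it directly.

```python
from cryptography.fernet import Fernet

class RevocableDataVault:
    """Hypothetical vault: data stays encrypted on the owner's device,
    and access is granted or revoked by managing copies of the data key."""

    def __init__(self, data: bytes):
        self._data_key = Fernet.generate_key()
        self._ciphertext = Fernet(self._data_key).encrypt(data)
        self._grants: dict[str, bytes] = {}  # party DID -> data key copy

    def grant(self, party_did: str) -> None:
        self._grants[party_did] = self._data_key

    def read(self, party_did: str) -> bytes:
        return Fernet(self._grants[party_did]).decrypt(self._ciphertext)

    def revoke(self, party_did: str) -> None:
        self._grants.pop(party_did, None)
        # Rotate: re-encrypt under a fresh key so old key copies are useless.
        plaintext = Fernet(self._data_key).decrypt(self._ciphertext)
        self._data_key = Fernet.generate_key()
        self._ciphertext = Fernet(self._data_key).encrypt(plaintext)
        for did in self._grants:
            self._grants[did] = self._data_key

vault = RevocableDataVault(b"my purchase history")
vault.grant("did:example:model-trainer")
print(vault.read("did:example:model-trainer"))  # b'my purchase history'
vault.revoke("did:example:model-trainer")       # trainer can no longer read
```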

This is an advance on the existing system, in which valuable information is simply collected and sold to centralized AI companies. Instead, it enables broad participation in AI development.

Individuals can engage in various roles, such as creating AI agents, supplying specialized data or offering intermediary services like data labeling. Others might contribute by managing infrastructure, operating nodes or providing validation services. This inclusive approach allows for a more diversified and collaborative AI ecosystem.

We could create a system that benefits everyone in it, from the digital clerks a continent away, to the shoppers whose cart contents provide the raw data, to the developers behind the scenes. Crypto can provide a safer, fairer, more human-centric collaboration between AI and the rest of us.


Sean is the CEO and co-founder of Sahara, a platform building trustless, permissionless and privacy-preserving blockchain-powered infrastructure that enables individuals and businesses to develop customized autonomous AI tools. Sean is also an Associate Professor in Computer Science and the Andrew and Erna Viterbi Early Career Chair at the University of Southern California, where he is the Principal Investigator (PI) of the Intelligence and Knowledge Discovery (INK) Research Lab. At the Allen Institute for AI, Sean contributes to machine common sense research. Previously, Sean was a data science advisor at Snapchat. He completed his PhD in computer science at the University of Illinois Urbana-Champaign and was a postdoctoral researcher in Stanford University’s Department of Computer Science. Sean has received several awards recognizing his research and innovation in the AI space, including Samsung AI Researcher of the Year, MIT TR Innovators Under 35 and Forbes Asia 30 Under 30.
