Who Controls the Edge of Computing? Utopian Visions, Dystopian Truth.

// April 03, 2024

The AI age is upon us. Neural networks, LLMs, and generative models of every kind are advancing at a breakneck pace, and corporate funding for AI is at an all-time high. Automated systems have shaped our daily lives for decades, but now the bots are starting to talk. They’re starting to run our bureaucracies, our companies, our military systems. They’re starting to make decisions. They are a great societal boon, but they are also a threat. What we do next matters. How we source compute and data for AI, and how ownership of models is distributed throughout society, is an existential question we must face.

Building AI the Right Way

How the compute for AI models is sourced is already cause for concern. Large corporations running huge proprietary datasets on centralized cloud servers pose a very real threat to our economic and social autonomy. If only those with access to compute and data have AI, they will win any capitalist race and emerge as dystopian cyberpunk zaibatsus in the blink of an eye. It’s not just the threat of being turned into paperclips, either. The open web’s decentralization movement, and its concerns about data privacy and how that data is used, matter more than ever in an age where LLMs harvest copyrighted material with impunity.

Ideology aside, there are practical concerns. Top-down capital investment in any new network is expensive, inefficient, and rent-seeking. On an operational level, AI models making real-time decisions cannot fetch their data from high-latency data centers halfway across the world while managing operations like, say, driving a car. An ISP outage cannot be an option when AI systems are monitoring power plant controls. And you can’t build a massive compute and data hub everywhere you want an advanced AI to operate.

Edge Computing Potential

To many, the answer is clear: AI compute, data, and operational execution should live on devices at the edge. Edge computing and the IoT have been mainstream ideas in tech since Alexa first infiltrated our homes. With DePIN, crypto innovators are figuring out how access-control tokens can manage these networks in a way that bestows ownership on the many, not the few. And with the rise of AI models and their expected integration into every aspect of business, bureaucracy, and social administration, edge computing has taken on a new meaning.

Edge computing for AI means using software to link every device into a (hopefully) decentralized, orchestrated choir of compute and data, sourcing the resources needed to run AI anywhere and everywhere. It is local, it is efficient, and the sum of all the unused processing power in our pockets far outstrips any giga-datacenter that Amazon, Google, Microsoft, or any other Big Tech company could ever build.

Using edge computing for the datasets, training, and inference of AI models has the potential to make them far more resilient, scalable, adaptive, and simply faster. Data processing and analysis can happen in real time directly on the edge devices a model operates on, and on those nearby. A multi-region decentralized database offers the fastest possible data retrieval, and AI can run against it across availability zones with horizontal scaling potential.

Local sensors, transmitting devices, routers: all of them can collect data and feed it immediately to the appropriate models at low latency, leveraging each other’s compute and data to make the best decisions. This also opens up possibilities for democratic digital ownership of AI models, preventing the data and compute behind essential AIs from being hoarded by one particular company or group of individuals. It is, in short, the best way to advance AI technology both practically and morally.
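To make the idea concrete, here is a minimal sketch of how an edge model might pick a nearby peer to borrow compute from. Every device name, latency figure, and threshold here is invented for illustration; a real system would probe latency and load live.

```python
# Toy illustration: pick the lowest-latency nearby peer that has enough
# spare compute to run an inference task. Names and numbers are hypothetical.

def pick_peer(peers, min_free_cores):
    """Return the peer with the lowest latency that meets the compute
    requirement, or None if no peer qualifies."""
    candidates = [p for p in peers if p["free_cores"] >= min_free_cores]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["latency_ms"])

peers = [
    {"name": "router-42",   "latency_ms": 3,   "free_cores": 1},
    {"name": "phone-alice", "latency_ms": 8,   "free_cores": 4},
    {"name": "cloud-dc",    "latency_ms": 140, "free_cores": 64},
]

best = pick_peer(peers, min_free_cores=2)  # -> the "phone-alice" entry
```

The point of the sketch: the distant data center loses to a neighboring phone on latency, even though it has far more raw compute, which is exactly the trade edge AI exploits.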

Edge Computing Challenges

Yet it comes with challenges. Major ones. Our current software architectures were not intended for this purpose, and new ones need to be built. A large company like Samsung may wish to make the 500 million devices it sells every year intelligent with one another, but such closed vendor systems would stifle AI innovation and make deployment too rigid, too expensive, and too profit-focused. To be clear, just because computing is done at the edge doesn’t mean it can’t be centralized; anything but. Edge devices can be the worst culprits for data governance, management, and compliance.

The key is to create openly available, device-agnostic, privacy-retaining systems that let data be used by edge AI models only with explicit consent, and that keep every use within data-compliance rules. That is no easy task when you want data distributed across many devices to be instantly, but only selectively, accessible. Localized processing of sensitive data is tough without powerful, universal, decentralized access policies that comply with all local data regulations. The devices around you may be compliant, but any given edge device you deploy an AI model (or any software) to may not be, or may be vulnerable to attack. Managing and updating AI models across hundreds of edge devices is, as it stands, a logistical nightmare.
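A consent-gated access check can be sketched in a few lines. This is a hypothetical toy, not any real protocol: the record shape, field names, and region codes are all invented to show the idea of data that is accessible only to a named consumer, for a named purpose, within an allowed jurisdiction.

```python
# Hypothetical consent-gated access check for a piece of edge data.
# The policy lives with the record itself, so any device can enforce it.

def can_access(record, requester, requester_region, purpose):
    """Grant access only if the owner consented to this requester and
    purpose, and the requester sits in an allowed jurisdiction."""
    consent = record["consent"]
    return (
        requester in consent["granted_to"]
        and purpose in consent["purposes"]
        and requester_region in record["allowed_regions"]
    )

record = {
    "owner": "alice",
    "allowed_regions": {"EU", "UK"},   # compliance boundary for processing
    "consent": {
        "granted_to": {"traffic-model-v2"},
        "purposes": {"route-planning"},
    },
}

can_access(record, "traffic-model-v2", "EU", "route-planning")  # True
can_access(record, "ad-profiler", "EU", "route-planning")       # False
```

The hard part the paragraph describes is exactly what this toy glosses over: keeping such policies universal, up to date, and enforceable across thousands of heterogeneous devices.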

How Source Helps Developers at the Edge of AI

Source Network’s stack solves these challenges for AI developers at the edge. Source provides all the tools necessary to deploy edge computing models on a fully decentralized stack.

DefraDB is a modular, locally deployable distributed database with access policies that are continuously updated and enforced by SourceHub, which acts as the trust layer for network activity. Data governance, access, and authentication are all handled by the protocol through tools like the Orbis Secrets Management Engine.

With DefraDB, developers can deploy local data safely and securely on any edge device, and have that data be interoperable with any other device they want to deploy it on. DefraDB ensures that both developers and end users have complete ownership, and authorship, of their data, and that it is used only for its intended purpose. As a huge tidal wave of devices begins connecting through edge computing rollouts, every developer needs access to tools like DefraDB. Transforming data into models that work with DefraDB is also easy, thanks to LensVM.
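The convergence property behind interoperable device-local data can be illustrated with a toy last-write-wins register. To be clear, this is not DefraDB’s API or its actual data model (DefraDB uses far richer conflict-free replicated data types); it is only a minimal demonstration of how two replicas can merge without a central server and still agree.

```python
# Toy last-write-wins register: each field carries a logical clock and
# a device id, so two replicas merged in either order converge to the
# same state. A simplified stand-in for real CRDT machinery.

def merge(a, b):
    """Merge two replica states field by field; the higher logical clock
    wins, with the device id as a deterministic tie-breaker."""
    merged = dict(a)
    for key, (clock, device, value) in b.items():
        if key not in merged or (clock, device) > (merged[key][0], merged[key][1]):
            merged[key] = (clock, device, value)
    return merged

# Two devices edited the same record offline (field: clock, device, value).
phone  = {"name": (2, "phone",  "Alice"),  "city": (1, "phone",  "Lisbon")}
laptop = {"name": (1, "laptop", "Alicia"), "city": (3, "laptop", "Porto")}

state = merge(phone, laptop)
# Merging in either order yields the same state: "name" from the phone
# (clock 2) and "city" from the laptop (clock 3).
```

Order-independence is what lets data written on one edge device be deployed on any other: no device is the authority, yet all of them agree.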

Source is perfect for building tamper-proof, immutable monitoring tools for AI activity, for ensuring that datasets across multiple devices have no discrepancies, and for keeping every device in the network singing from the same song sheet, so to speak. By using a blockchain as the trust layer for reading, writing, and transforming datasets between discrete devices, Source Network helps facilitate truly open AI that is free to live on any device without the privacy, security, and operational threats that centralized providers (to be charitable, often through no fault of their own) pose.
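The tamper-evidence idea can be shown with a hash chain: each log entry’s hash covers the previous entry, so rewriting history breaks every later link. This sketches the principle only; Source Network anchors trust in a blockchain rather than a single local chain like this one, and the event strings are invented.

```python
# Tamper-evident audit log sketch: each entry hashes over the previous
# entry's hash, so editing any past entry invalidates the whole chain.
import hashlib
import json

GENESIS = "0" * 64

def append(log, event):
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    prev = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "model-v1 read sensor-7")
append(log, "model-v1 wrote decision-12")
verify(log)                   # True: the chain is intact
log[0]["event"] = "forged"    # tamper with history
verify(log)                   # False: the chain no longer verifies
```

A blockchain generalizes this: the chain is replicated and agreed upon by many parties, so no single device can quietly rewrite what an AI did.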

Actually Open AI

The future world of self-driving cars and robot surgeons is not so fanciful anymore. Using edge computing to power AI is the only approach that makes sense, ideologically or practically, especially in the long run. Decentralization means better redundancy, better privacy, and, even if that is all you care about, better operational efficiency for AI models operating in every conceivable sector of the future economy.

It also opens up avenues of broad-based democratic ownership for AI models, accelerating their growth as they proliferate through more and more willing devices: hyper-scalable AI with compute donated by every member of society with a vested interest in its propagation and growth. That is how we get to a singularity, if that is indeed our aim. But it has to be done right from the start.

We are at a crossroads. If we continue on the path we are on, where capital, compute, and data for AI are continually hoarded in proprietary centralized labs and where the latest models are only available for rent, then we are doomed to dystopia. Pandora’s box might be opened in some corporate office before any of us know it exists. We may never own our creative commons again. More mundanely, the gains of AI advancement will accrue only to the hyper-elite, entrenching digital inequality.

Yet there is another way. One where the first vision of the free internet is recaptured. Where AI models source their compute and data from devices governed by a vast, tamper-proof, decentralized network in which we all have a stake. Where an AI using your edge device to power its services pays you back for the privilege, or at least lets you use the operation it’s performing for free.

Where open-source innovation thrives securely at the fringes, with local models paying their dues for the data they use and the capital returned in kind. Where everyone who owns a mobile phone takes a very real stake in the advancement of open AI. Where everyone has a genuine vote on how these models could and should be used. A democratic collective commons around the Promethean fire we have unearthed, and the utopia we could create with it.

To achieve that utopia, every aspect of the stack that powers edge devices must be decentralized too. What Bitcoin was for money, Source can be for edge AI. Access-control policies, data storage, device management, authentication, computing power: Source Network’s tools decentralize all of it, and they’re available right now for you to use.
