Most of us think of Artificial Intelligence (AI) as a wonderful and, in some ways, daunting technology that can change the world. AI tools like ChatGPT and Bard became our new search engines, and AI’s prospects and perils have mobilized governments and policymakers worldwide to regulate the technology.
But only some of us know what lies beneath what the world perceives as AI: that AI summons its muscle from the sheer computing power provided by large-scale data centers and supercomputers.
Think of tens or hundreds of thousands of computing systems racked, stacked, and clustered in warehouse-scale facilities of up to 1 million square feet. These systems have the capacity and computational power to host the massive amounts of data that we need to train AI models and support millions of people who, at any given time, try to use these AI services for good or evil.
Because AI draws its power from warehouse-scale computing behemoths, it is costly for both people and the environment. Data centers are well-known sources of major environmental pollution.
And while we may think that AI is free for all, we often overlook the fact that accessing any of these AI services requires fairly expensive smartphones, tablets, or laptops; pricey subscriptions to high-speed internet, which is not available everywhere; and, more generally, a host of digital assets that not everyone can afford.
A further concern is that AI technology is owned exclusively by a very small number of big technology companies, which host all user data and use it at will.
My current research, funded by the National Science Foundation and by Sony, looks into how to do “AI in the cheap and small” and make it more affordable, sustainable and accessible to anyone. I am looking into how to use inexpensive mobile devices, community network hubs, and drones to provide the same AI services we currently get from the so-called “Big Tech” warehouse-scale computers.
This may seem ironic to my scientific community, as I am mostly known for my research on how to program and manage the abundant hardware resources of warehouse-scale computing systems. A shift to “miniaturizing” my research to make AI accessible and sustainable may seem counterintuitive. But it makes sense, because the software and hardware components that I developed in my research have one common denominator: trying to pack as much computation and data in the smallest pieces of hardware possible.
Doing AI in the small is valuable to many people who do not have the means to use and benefit from Big Tech AI. It might also be safer if people do not have to share their data with Big Tech companies for the sake of building new AI models that benefit only a few.

Miniaturizing AI can be about rural communities that do not have good broadband internet access or the latest personal devices. Or it can be about inexpensive wearable systems that can monitor biological signals and save lives.

I believe miniaturizing AI is feasible, safe, and affordable. It is a means of democratizing AI, a misunderstood term that means bringing the benefits of AI to everybody. Miniaturizing AI also addresses some of the most pressing concerns about the technology’s environmental sustainability and its predatory nature.
A famous saying suggests that “good things come in small packages.” With some research success, that could one day apply to AI.