It is early Saturday morning, the day after Random Wire 161 was published. Dawn is just around the corner and I’m reading a Gizmodo piece titled “OpenAI’s Stargate Data Center Approved in Michigan as American Anger Starts to Boil.”
It makes me wonder about many things. Power consumption. Water for cooling. Data transfer infrastructure. Roads. Local communities. Most of all, it makes me wonder about the impact of artificial intelligence (AI) on humans and how this will all play out over time.
I’m not opposed to progress. As a boomer, the idea of using science and technology to solve problems is embedded in my psyche. Nevertheless, AI causes me concern because I don’t feel I can trust AI, at least not yet.
The present state of AI seems somewhat analogous to GPS navigation in cars. I’m sure we’ve all heard stories of people who have accepted whatever navigation guidance their GPS gave them, only to end up stuck in a snowbank in a remote area. Blindly trusting technology isn’t the best idea when it comes to navigating with a GPS, nor is it wise to do so with an AI. The technology is too young and the information being assembled by AI engines remains too unvetted to trust completely.
AI output suggests precision, but that doesn’t mean the answer is right: it isn’t always accurate.
Target shooters and hunters know about precision vs. accuracy. A very precise firearm allows the shooter to place rounds very close together, even when those rounds aren’t close to the spot being targeted. Precision means consistency in results. An accurate firearm places rounds, on average, around the spot being targeted, usually the center of the target: the bullseye. What shooters want is both: precision (consistently grouped data points) and accuracy (data points centered on the bullseye).
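To put numbers on that distinction, here is a minimal sketch in Python using made-up shot coordinates: the spread of the group around its own center measures precision, while the distance from the group’s center to the bullseye measures accuracy.

```python
# A minimal sketch of precision vs. accuracy. The shot coordinates
# are invented for illustration; the bullseye sits at (0, 0).
import math

# A tight group that landed well away from the bullseye.
shots = [(4.1, 3.9), (4.0, 4.2), (3.9, 4.0), (4.2, 4.1)]

# Center of the group: the average point of impact.
cx = sum(x for x, _ in shots) / len(shots)
cy = sum(y for _, y in shots) / len(shots)

# Precision: average spread of shots around their own group center.
spread = sum(math.dist((x, y), (cx, cy)) for x, y in shots) / len(shots)

# Accuracy: how far the group center sits from the bullseye.
offset = math.dist((cx, cy), (0.0, 0.0))

print(f"precision (group spread): {spread:.2f}")   # small -> very precise
print(f"accuracy (offset from bullseye): {offset:.2f}")  # large -> not accurate
```

Run it and the group scores a spread of about 0.13 units but an offset of about 5.7: precise, yet not accurate, which is exactly the failure mode a confident-sounding AI answer can have.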
Says Google through its search engine AI:
To make AI results more accurate, provide specific context and constraints in your prompts, use structured formats (like JSON), break down complex tasks, give iterative feedback, and employ advanced prompting methods like Chain-of-Thought (CoT) to encourage step-by-step reasoning, all while leveraging features like Retrieval-Augmented Generation (RAG) for grounded answers. Continuously refine interactions and verify outputs, as AI accuracy depends heavily on prompt quality and can’t reach 100% in dynamic situations.
In other words, one would use iterative problem solving to help guide the AI engine to a “good” answer. The corollary is that you shouldn’t just accept the first answer you get, especially for difficult or complex problems. I think AIs know about precision in the context of providing consistent answers. I don’t think AIs yet understand accuracy, i.e., how close an answer is to being right and true.
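To make that iterative loop concrete, here is a minimal sketch in Python. The ask_llm() helper is a hypothetical stand-in for whatever model or API you happen to use, not a real library call; the point is the shape of the exchange: constrain the question up front, then feed the answer back for a second look instead of trusting the first result.

```python
# A rough sketch of iterative prompting. ask_llm() is hypothetical;
# replace the stub with a real call to your model of choice.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: send a prompt, return the model's reply."""
    return "(model response would appear here)"

question = "How do I set a static IP address on Debian 12?"

# First pass: give specific context and constraints up front.
first_answer = ask_llm(
    "Answer for Debian 12 (bookworm) only, not Ubuntu or another "
    "distribution. Explain your reasoning step by step.\n\n" + question
)

# Second pass: iterate rather than accepting the first result.
review = ask_llm(
    "Review this answer strictly for Debian 12 accuracy. Flag anything "
    "that only applies to a different distribution or version:\n\n"
    + first_answer
)

print(first_answer)
print(review)
```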
I’ve worked with people long enough to know that some people will always accept whatever they are told. They simply take it as gospel when something like their GPS, or Alexa, or a search engine gives them an answer. These are the people I worry about the most because they can be easily guided toward incorrect conclusions and actions. These are the ended-up-in-a-snowbank people.
At the other end of the spectrum, some people will never believe what others tell them. We see this in present-day society in the political space where partisanship has become so strident and ideological that facts no longer matter. What you say matters less than how often and loudly you say it. I worry about these people, too, because of their selective hearing. This group of folks is probably less prone to ending up stuck in a snowbank. Still, they are sometimes easily led because they want to believe somebody, and who better to follow than the loudest, most persistent voices? Those bigger-than-life personalities must know more than everyone else, right?
So the bookends on the spectrum of how accepting people are of AI are those who believe everything they are told and those who question everything they are told. Neither of these approaches seems to work well with AI.
Perhaps because I’m named Thomas, I tend to question things, particularly the underlying basis or reason for particular strategies and actions. Once I’m satisfied that I understand, I have enough information to adjust my point of view, if I wish to do so. Seeking understanding is, I think, a healthy approach to information. Often, though, the people on the extreme ends of the spectrum described above do not find it healthy. They view my Doubting Thomas approach as disruptive, and disruptive people are sometimes not heard well by others. Disruptors tend not to be highly trusted.
AI is a disruptor, and it is not yet at a point where we can fully trust its results. I see this almost every time I ask an AI engine or LLM (large language model) very specific “how to” questions about Linux, electronics, or amateur radio. The engine gathers information from a wide range of sources and rolls it all up, even when the information doesn’t really fit. (To be fair, I’ve had the best luck using an LLM for such questions, but that is because I have already reviewed and selected the “best” information sources. That’s not the easy path, which is to simply ask an AI.) How to do something in Linux is presented by an AI as factual and workable, but it seems like half the time, it doesn’t work at all. Same with circuit design. Same with how to configure settings on a particular radio.
Am I part of the problem? I have concluded that yes, I am. In particular, when I query an AI about AllStarLink, I often see one of my articles referenced as a source. Unfortunately, some of those articles are already out of date, but the information lives on, even when it is no longer right. I think those of us who create content for others to consume bear a special responsibility for information quality, but it’s probably too big a lift to expect creators to go back and review their own published content for accuracy. That means older, out-of-date information will continue to be sourced by AI engines, perhaps long after it should have been retired. (Maybe we should consider putting additional parameters on the quality of our information, like “this information only applies to AllStarLink version 2,” or something akin to what we see on food packaging: a “best used by” date.)
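As a purely hypothetical sketch of what such parameters could look like (the field names here are invented for illustration, not any real standard):

```python
# Hypothetical "best used by" metadata a content creator might attach
# to an article; every field name here is invented for illustration.
article_meta = {
    "title": "Setting Up an AllStarLink Node",
    "applies_to": "AllStarLink version 2",  # which version the steps cover
    "published": "2022-06-01",
    "best_used_by": "2025-06-01",           # treat as stale after this date
}
```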
I see this particularly in answers to questions about Linux because not only are there different versions of the same flavor of Linux, there are also many different flavors of Linux. I’ll ask a question about Debian Linux and get a reference to Ubuntu Linux. Granted, they are similar, but they aren’t the same, and sometimes those answers are simply wrong. (I’m experiencing this now as I play with some OrangePi single board computers, a platform family that is less common than the Raspberry Pi family. AIs have even fewer data sources for the OrangePi devices, and some of the answers to my questions are absurdly wrong.) The AI engines are still not intelligent enough to know the differences between Linuxes, or that those differences matter. AI responses are worded to sound authoritative, and while they may be precise, I find they often lack accuracy. The responses are clustered together but sometimes are not near the bullseye.
I haven’t touched on radio topics much in this opinion piece because most of us probably aren’t using AI results in our radio play. I do ask AIs for help and sometimes I attempt to use that advice. Sometimes I just laugh at the absurdity of the results. And sometimes, I shake my head, knowing that someone, somewhere, will believe what the AI is presenting and end up creating more problems than they started with.
AI is with us now. It’s not going away. That’s the nature of progress. We humans get to do what we have been so good at for millennia: adapt. That’s right, humans will need to adapt to AI, even while people smarter than me attempt to shape AI to adapt to humans. I encourage you to query AIs with your radio questions because it helps train the AIs. Just as important, please do not believe everything that is fed back to you. Don’t be the person who gets stuck in a snowbank in the wilderness. Be a doubting Thomas and take what the AI gives you with a healthy grain of salt. Look for corroborating information from your own sources rather than simply trusting the AI. We just aren’t there yet.
Featured image: Photo by LJ Checo: https://www.pexels.com/photo/star-wars-r2-d2-2085831/



