Quantum computing is pretty exciting…it will allow humankind to do things never before possible, even with banks of supercomputers churning away for years. We just reported recently on Microsoft’s new Majorana 1 processor chip, and Amazon and Google have also made advancements. We may see quantum computing in daily use in years, not decades, now. That’s all cool…what is scary is that a quantum computer can break, in seconds, encryption that would take a regular supercomputer years. Geekwire.com reports that there is essentially a parallel race on to develop newer, quantum-resistant encryption that can’t be easily broken, and to get it out to companies…particularly financial institutions…before quantum computing is out in the wild and available to bad guys. Let’s hope that the so-called ‘DOGE’ that Elon Musk is using to wholesale chop government agencies doesn’t hit the National Institute of Standards and Technology and its Post-Quantum Cryptography project! It will take years to deploy post-quantum encryption to businesses and the public.
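A quick aside for the technically curious: ‘post-quantum’ encryption isn’t produced by quantum computers…it’s ordinary math, run on ordinary computers, that quantum machines can’t easily crack. Below is a minimal sketch of a post-quantum key exchange using the open-source liboqs-python bindings (the oqs package). The algorithm name and calls are assumptions based on that library and may vary by version…this is an illustration, not a deployment guide.

import oqs

# ML-KEM-768 (formerly Kyber) is one of the algorithms NIST standardized;
# older liboqs builds may list it as "Kyber768" instead.
ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this

    # The sender encapsulates a fresh shared secret against the public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_sent = sender.encap_secret(public_key)

    # The receiver recovers the same secret from the ciphertext.
    secret_received = receiver.decap_secret(ciphertext)

assert secret_sent == secret_received  # both sides now share a symmetric key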
There are real plusses to AI models that are open, but there is a dangerous downside to them, too. One is that the Russians are working overtime to feed disinformation and Russian-slanted propaganda into them. According to gizmodo.com, picking up on a NewsGuard report, a propaganda network called Pravda produced more than 3.6 million articles in 2024 alone, which NewsGuard found are now incorporated into the 10 largest AI models, including ChatGPT, xAI’s Grok, and Microsoft Copilot. It should be noted that this ‘Pravda’ network is not connected with the infamous Russian newspaper that was one of the two main propaganda arms of the Soviet Communist Party. It certainly picks up where that paper left off in disseminating propaganda, though. NewsGuard discovered in its audit that chatbots operated by the 10 largest AI companies collectively repeated false Russian disinformation narratives 33.55% of the time, gave a non-response 18.22% of the time, and offered a debunk 48.22% of the time. NewsGuard refers to this as ‘AI grooming’: by spinning up seemingly legitimate-looking websites, the network gets the models to ingest and regurgitate information they do not understand is propaganda. Couple this with AI ‘hallucinations,’ and you can see the wisdom of always double-checking what an AI model produces for you. Hey, you have the time…the AI generates its product in seconds!
Threads is test-driving adding ‘interests’ to profiles in order to connect users and drive more engagement. This is no doubt in response to Bluesky’s having a ‘description’ right under a user profile that lets people say a little about themselves and list their interests as well as their disinterests! TechCrunch.com says Threads hopes to pick up more disgruntled X users and, along with custom feeds, to slow the fast growth of Bluesky. The Bluesky system works…I have 2,200 followers there just since the election, and only 334 on Threads! Some of this is due to a number of people not wanting to use a Meta platform, but I think a lot of it is that you can quickly vet a request on Bluesky and accept if their interests are similar, or block them if…for example…they appear to be a troll, or they just have pics showing off their body and list an OnlyFans account.
There are a number of tools and apps out in the wild that do an amazing job of cloning a voice from only a few seconds of sample audio. For those of us in the business and for famous actors, this is a huge issue that was part of the big SAG-AFTRA strike last year. But beyond that, it can also mean scams, fraud, and the like for normal folks just going about life. Zdnet.com reports that Consumer Reports checked out six of the most widely known platforms…Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. Their tests found that four of the six…namely ElevenLabs, Speechify, PlayHT, and Lovo…didn’t have the technical ability to prevent cloning someone’s voice without their knowledge, or to limit the AI cloning to only the user’s own voice. The so-called protection consisted of checkboxes and a consent statement. One of them…Descript…had the user read and record the consent statement and then used that audio to create the clone. For non-professionals, the most common scam is one you have no doubt heard of: scammers clone the voice of a family member and use it to call a loved one, asking that money be sent to help them out of a dire situation. Because the victim thinks they are hearing the voice of a family member in distress, they are more likely to send whatever funds are asked for without questioning the situation. Again, if you get a call from a relative needing money right now, don’t bite. Try another means of contact…email, text, etc.…and then verify using knowledge that only you and that family member would have.
I’m Clark Reid and you’re ‘Technified’ for now.