Apple Music gives a whole new meaning to the phrase "the hits just keep on coming." It's not the opposing candidates, it's public AI systems that are spreading election disinformation. LockBit, the cybercriminal gang, may be back from the dead. And we say so long to the ChatGPT plugins, which went from innovation to legacy in only a few months.

All this and more on the "they were so 2023" edition of Hashtag Trending. I'm your host, Jim Love, CIO of IT World Canada and TechNewsDay in the US.

The European Commission has imposed a fine of about two billion dollars US on Apple for what it says is Apple's abuse of its dominant market position in the distribution of music streaming apps through its App Store. The Commission found that Apple had imposed restrictions on app developers that prevented them from informing iOS users about alternative and more affordable music subscription services available outside the app, restrictions known as "anti-steering provisions." Such actions are illegal under EU antitrust rules.

This latest action from the European Commission was initiated by a complaint from the Swedish music streaming service Spotify.

Margrethe Vestager, the executive vice president in charge of competition policy, said, "For a decade, Apple abused its dominant position in the market for the distribution of music streaming apps through the App Store. They did so by restricting developers from informing consumers about alternative, cheaper music services available outside of the Apple ecosystem. This is illegal under EU antitrust rules, so today we have fined Apple over €1.8 billion."

Apple, as the sole provider of an App Store for iOS users across the European Economic Area (EEA), controls every aspect of the iOS user experience, including the terms and conditions developers must comply with to reach iOS users. The Commission's investigation found that Apple's anti-steering provisions banned developers from:

- Informing iOS users within their apps about subscription offers available on the internet outside of the app.
- Informing iOS users within their apps about the price differences between in-app subscriptions sold through Apple's in-app purchase mechanism and those available elsewhere.
- Including links in their apps that lead iOS users to the developer's website, where alternative subscriptions can be purchased.

Additionally, app developers were barred from contacting their newly acquired users, for instance by email, to inform them about alternative pricing options after an account had been set up.

Apple has taken a series of hits from the European Commission, earlier on its App Store practices and now over music streaming. It also faces a class action lawsuit in the U.S. over its closed system, which forces users to pay for iCloud backups. Clearly Apple's closed ecosystem, the thing that has made it a multi-trillion-dollar company, is under attack.

Apple has said it will appeal the decision. It maintains that consumers were not harmed by its actions, and that Spotify suffered no harm either, since it sells its subscriptions from its own website and doesn't have to pay Apple anything.

Sources include:

A recent study has revealed that public AI chatbots have been spreading false and misleading information about the 2024 election.
This came from research conducted by the AI Democracy Projects and Proof News, underscoring the critical need for regulatory oversight as AI increasingly influences political discourse.

The study tested various AI models, including OpenAI's GPT-4, Meta's Llama 2, Anthropic's Claude, Google's Gemini and Mistral's Mixtral, and found that all of them gave voters incorrect information, including wrong polling locations, illegal voting methods and false registration deadlines. For example, Llama 2 incorrectly claimed that California voters could vote by text message, which is not a legal voting method in the United States. And none of the models tested could accurately identify that campaign logo attire, such as MAGA hats, is prohibited at Texas polling stations.

The spread of misinformation has prompted responses from the AI developers. Anthropic plans to release an updated version of its AI tool with accurate election information, and OpenAI has said it intends to refine its approach. Meta, however, dismissed the findings as "meaningless," which has sparked controversy and raised questions about its commitment to curbing misinformation.

That attitude from Meta could come at a cost, since it may put pressure on governments to increase regulation of AI systems, although election officials, and perhaps the public, might not consider that a bad thing.

Sources include:

The Federal Bureau of Investigation (FBI) and the UK's National Crime Agency, along with agencies from ten countries, collaborated in Operation Cronos to target the LockBit ransomware gang, one of the world's most successful ransomware groups. The operation, which began on February 20, led to the takedown of over 30 servers and the seizure of source code, decryption keys, affiliate details, chat logs and more, effectively disrupting LockBit's operations.

The agencies involved added a twist of humour to the takedown by altering the traditional "Game over" message seen by users trying to connect to the seized sites. They included a loading animation featuring the flags of the agency consortium and embedded images with jokey file names. They also repurposed LockBit's countdown timer, which typically showed victims how long they had left to pay up, so that it counted down to the unmasking of LockBit's leader, known as LockBitSupp.

LockBit, which emerged in 2019, had become the most successful ransomware gang by adopting a business-oriented model. It provided tools and managed negotiations with victims in exchange for a 20 percent cut from its affiliates, who carried out the actual hacking. The gang's professional online presence and marketing were notable; it even ran bug bounty programs to improve its operations and security.

But LockBit may get the last laugh on this one. Despite the initial success of Operation Cronos, LockBit and LockBitSupp resurfaced online just five days later, with LockBitSupp mocking the federal agencies' efforts. The FBI and its partners had anticipated a comeback, emphasizing that they had obtained keys that will help thousands of victims. Future rounds of this ongoing battle between law enforcement and the LockBit gang will determine the ultimate victor. It may turn out that LockBit had a better recovery plan than most of its victims.

Sources include:

BBC News has launched a new "content credentials" feature, designed to prove the authenticity of images and videos used in its journalism. The feature, part of BBC Verify, gives users a button labeled "how we verified this" beneath images and videos on the BBC News site. Clicking the button reveals the verification steps taken by BBC journalists, such as cross-referencing content with other sources, examining metadata, comparing locations and checking that shadows fall the right way, among other methods.

This initiative aims to counter the spread of disinformation, AI-generated deepfakes and other forms of manipulated content, and to help audiences distinguish real BBC content from fakes encountered on external sites. Deborah Turness, CEO of BBC News, emphasized the importance of earning trust by showing audiences not just what the BBC knows, but how it knows it, highlighting the significance of transparency in today's environment of deepfakes and misinformation.

The content credentials feature incorporates a new technical standard that embeds information about the origins of a piece of media, including how it has been edited, functioning like an audit trail. The standard was developed by the Coalition for Content Provenance and Authenticity (C2PA), co-founded by BBC Research & Development, Adobe and Microsoft; it is freely available and has seen participation from major organizations such as Google, Facebook and OpenAI.
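To make the idea of an embedded audit trail a little more concrete, here is a minimal sketch in Python. It is not the actual C2PA manifest schema and not the BBC's implementation; the class and field names are illustrative assumptions, and a real content credential would be embedded in the media file and cryptographically signed.

```python
# Illustrative sketch only: a simplified stand-in for the kind of provenance
# "audit trail" a content-credentials standard describes. The structure and
# field names here are hypothetical, not the real C2PA manifest format.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List


@dataclass
class ProvenanceAction:
    """One step in a piece of media's history, e.g. capture or an edit."""
    action: str     # e.g. "captured", "cropped", "colour-corrected"
    tool: str       # software or device that performed the step
    timestamp: str  # when the step happened, ISO 8601


@dataclass
class ContentCredential:
    """A simplified provenance record attached to a published asset."""
    asset: str      # filename or URL of the media
    publisher: str  # organization asserting provenance
    actions: List[ProvenanceAction] = field(default_factory=list)

    def record(self, action: str, tool: str) -> None:
        """Append a step to the audit trail."""
        now = datetime.now(timezone.utc).isoformat()
        self.actions.append(ProvenanceAction(action, tool, now))

    def to_manifest(self) -> str:
        """Serialize the trail as JSON; a real system would also sign it."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    credential = ContentCredential(asset="verified-footage.mp4", publisher="BBC News")
    credential.record("captured", "smartphone camera")
    credential.record("trimmed", "desktop video editor")
    print(credential.to_manifest())
```

The real standard goes further, binding a record like this to the media file itself and signing it so that later tampering can be detected.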
Initially, content credentials will be available on select content published by the BBC Verify team on the BBC News site and app. Future plans include working with external publishers and social media networks to ensure the credentials are displayed wherever news is consumed, making it easier to quickly identify genuine BBC content.

Sources include: BBC

And finally, OpenAI is doing away with its plugins. These functions, built by third-party developers, extended what ChatGPT could do. The first, and in my mind still the best, way of linking ChatGPT to the internet was via a plug-in.

What will replace the plug-ins? Apparently, OpenAI wants to use its GPTs, the recently introduced feature that lets anyone build their own mini-GPT, which can now also be called from another chat session.

That has a couple of problems. First, the plug-ins were created and tested to perform specific functions, and while there were a lot of them, they had some measure of quality control. GPTs, on the other hand, might be a great idea, but they were launched with no quality control, and their sheer number, by one estimate more than three million, makes it difficult even to conceive of how to sort through the mess and replace the plugins that users have come to count on.

A lesson for all of us: these public AI models are evolving rapidly, and their makers may not feel an obligation to support legacy functions. And in the world of AI, a "legacy function" can be measured in months, leaving little time to react.

That's our show for today.

Love your comments. Send us a note at [email protected] or drop us a comment under the show notes at itworldcanada.com/podcasts – look for Hashtag Trending. Thanks for listening and have a Terrific Tuesday.