A Microsoft expert casually dropped this big tip for safe AI use


In summary:
- PCWorld reports that Ram Shankar Siva Kumar, head of Microsoft AI’s red team, is warning users to treat independent AI models as the “Wild West” of security risks.
- The expert compares the current AI landscape to early internet uploads, where malicious actors and insecure developers pose significant threats to user data and devices.
- Microsoft recommends extreme caution with models from small, unknown AI developers, who may handle permissions insecurely or harbor malicious intent.
In the early days of the web, downloading files was new and fun. You might find all kinds of weird and interesting things, like MIDI versions of your favorite songs. But as much fun as unlimited Internet access was, danger lurked in some of these downloads, especially if they were “free” versions of popular paid software.
Microsoft says the same applies to AI.
At this year’s RSAC Cybersecurity Conference, I spoke with Ram Shankar Siva Kumar, Microsoft’s “Data Cowboy” and head of its AI Red Team, who shared a wealth of insight into the behind-the-scenes work of securing AI, along with an important tip for staying safe while exploring this rapidly growing area of technology. His advice: be wary of independent AI models.
You might be tempted to see this suggestion as an attempt by Big Tech to stifle small, upstart competition (not an unfair thought, considering Microsoft’s tedious Copilot blitz last year). But the warning was phrased much like the advice experts and journalists began giving in the late 1990s and early 2000s: be careful whom you download AI models from and what you download, especially when the developer is small and not well known.
For every developer who simply wants to share their work, there may be someone it’s dangerous to share information with or to give access to your PC. Some may be outright bad actors. Others may simply lack the experience to handle such permissions securely.
Traditional software went through a similar journey. Smart habits still include being careful about where you download files from, even after the rise of storefronts that scan for malicious apps. That same caution now needs to extend to AI. It’s the Wild West out there. Treat it as such, even when something is carefully packaged.
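One habit from that era translates directly: before loading a model file from an unfamiliar source, verify that it matches the checksum the developer publishes alongside the download. A minimal Python sketch (the filename and file contents here are placeholders, not a real model):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a stand-in file; for a real model you would compare the
# digest against the checksum published by the developer.
with open("model.bin", "wb") as f:
    f.write(b"pretend these are model weights")

print(sha256_of_file("model.bin"))
```

A matching checksum only proves the file wasn’t corrupted or swapped in transit; it says nothing about whether the developer themselves is trustworthy, which is exactly the judgment call the article is urging.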


