Full list of content consumed, including annotations
71 highlights & notes
17 minutes Engaged reading (read 09/09/25)
airesilience.substack.com | articles
The same AI capabilities underlie benefits and harms. How can we assure AI safety without curtailing beneficial applications? Defensive acceleration charts a possible course.
8 minutes Engaged reading (read 08/06/25)
arstechnica.com | articles
New Duke study says workers judge others for AI use—and hide its use, fearing stigma.
1 minute Engaged reading (read 02/28/23)
arstechnica.com | articles
Opinion: The worst human impulses will find plenty of uses for generative AI.
4 minutes Engaged reading (read 08/25/25)
arstechnica.com | articles
"We’re going to have supervision," says billionaire Oracle co-founder Ellison.
6 minutes Engaged reading (read 08/11/25)
The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.
"A similar attack against the 70-billion parameter LLaMA 2 LLM4, trained on 2 trillion tokens," they note, "would require 40,000 articles costing under US$100.00 to generate." The "articles" themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren't displayed, and noted that invisible text (black on a black background, or with a font set to zero percent) would also work.
arstechnica.com | articles
Changing just 0.001% of inputs to misinformation makes the AI less accurate.
4 minutes Engaged reading (read 09/02/25)
blog.samaltman.com | articles
This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it. If you’re already afraid of machine intelligence,...
1 minute Engaged reading (read 08/11/25)
3 minutes Engaged reading (read 09/02/25)
futurism.com | articles
AI companies are pouring so much money into AI, it may be propping up the US economy, which could make a bubble pop an even bigger disaster.
2 minutes Engaged reading (read 08/25/25)
OpenAI's o4-mini model hallucinated 48 percent of the time on its in-house accuracy benchmark, showing it was terrible at telling the truth. Its o3 model scored a hallucination rate of 33 percent, roughly double the rate of the company's preceding reasoning models.
futurism.com | articles
Artificial intelligence models have long struggled with hallucinations. The problem is getting worse as they become bigger and smarter.
3 minutes Engaged reading (read 08/25/25)
Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan.
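For context, one common form of test-time compute is best-of-n sampling: draw several candidate answers and keep the one a scorer likes best. A minimal sketch, where generate and score are hypothetical stand-ins for a model call and a verifier (OpenAI has not published its exact method):

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled model completion.
    return f"candidate-{random.randint(0, 9)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a verifier or reward model.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Spend n model calls at inference time; keep the best-scoring one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

print(best_of_n("What is 17 * 23?"))
```

The gain comes purely from spending more inference compute, not from a bigger model, which is the trade-off the highlight describes.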
futurism.com | articles
A survey of nearly 500 AI researchers indicated that the vast majority of them think that pursuing "scaling" won't achieve AGI.
8 minutes Engaged reading (read 07/29/25)
Also, generative AI content created according to an organization’s prompts could contain another company’s IP. That could cause ambiguities over the authorship and ownership of the generated content, raising possible allegations of plagiarism or the risk of copyright lawsuits.
Amazon has already sounded the alarm with its employees, warning them not to share code with ChatGPT. A company lawyer specifically stated that their inputs could be used as training data for the bot and its future output could include or resemble Amazon’s confidential information.
Among the top risks around the use of generative AI are those to intellectual property. Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects like text, images, audio or video based on patterns it recognizes in the data it has been fed. That includes the data that is inputted from its various users, which the tool retains to continually learn and build its knowledge. That data, in turn, could be used to answer a prompt inputted by someone else, possibly exposing private or proprietary information to the public. The more businesses use this technology, the more likely their information could be accessed by others.
kpmg.com | articles
The flip side of generative AI: Challenges and risks around responsible use.
6 minutes Engaged reading (read 09/02/25)
mariechristinestyves.ca | articles
Learn about the latest techniques being used to take advantage of older adults, such as deepfakes and AI-enabled spam calls. Gone are the days when fraudsters relied solely on telemarketing scams, robocalls or phishing emails. New technology and the rise of artificial intelligence (AI) have given them new ways to scam Canadians, especially seniors and vulnerable people.1 In a nutshell, artificial intelligence technology enables computers and machines to do things that once required human intelligence. For instance, this can include understanding language, recognizing patterns, making decisions and learning from experience. Unfortunately, AI scammers are tapping into this technology to deceive or trick people. From fake phone calls with cloned voices to online scams using chatbots, cybercriminals are using AI to update and enhance their old methods. In too many cases, these tech-savvy techniques are proving to be highly effective. Read on to learn some of the most common types of AI scams — and tips for how to avoid them.
Common types of AI scams targeting seniors
Nowadays, it’s not just cyber fraud, identity theft and scams such as home improvement scams that seniors need to watch out for. AI scams are on the rise, and it’s important to be aware of the most common types of these scams targeting seniors.
When you received a scam phone call in the past, you might’ve heard a robotic voice, realized it was fake and hung up. But what if the voice on the other end of the line sounds human and like someone you know? This is an example of a deepfake voice scam enabled by AI technology. Fraudsters clone voices and trick people into handing over money or personal details. The scammers often know where their target lives or the names of family members.2 Security experts claim this voice scam is easy, cheap, and convincing — and on the rise. Fraudsters only need a short audio clip, often pulled from social media, to copy someone’s voice.
Grandparent scams enhanced by AI
In traditional grandparent scams, a senior gets a call from someone who pretends to be their grandchild in trouble. The caller asks for money to deal with an emergency, maybe claiming to be in jail or stranded in another country, but doesn’t want anyone else to know. Sometimes, a second person poses as a lawyer or police officer and joins the call to demand payment or bail money right away. Deepfake technology takes grandparent scams to a new level. The calls are more convincing, and some scammers will even use fake AI-generated photos to help make their case.3 Fraudsters might also reach out via email, text, or social media, pretending to be a grandchild, asking for money or gift cards.4 In one instance, the imposter claims their phone is broken and needs to share their “new” phone number. They text or message through social media to ask for money to repair the broken phone or pay a bill.5
Romance scams with AI chatbots
More and more seniors are getting online to meet new people, especially if they’re looking to start dating again. But it also makes them targets for romance scams, which are running rampant, according to the Canadian Anti-Fraud Centre. With romance scams, a scammer lures you into a cyber relationship through email or fake profiles on social media or dating sites. They seek to gain your trust and affection — and money. Scammers often go after seniors who are recently divorced or widowed because this demographic may be vulnerable and have money on hand. AI has made it easier for romance scams to look believable.
Scammers are tapping into tools like voice simulators, face generators and deepfakes to create faux but realistic videos with existing images or footage.6 They may then use these phony videos or images to pose as loved ones or romantic interests and to convince you to send money or share personal details. Scammers also misuse AI chatbots, like ChatGPT, which can generate text responses that sound human. It makes romance scams even tougher to detect, as these chatbots can have longer conversations, express emotion, and change how they respond based on how you react. Gaining your trust and making sure you’re attached can help scammers succeed.7
How to detect and prevent AI scams
While AI scams are on the rise, there are a few things you can do to avoid these kinds of scams:
Stay skeptical. Don’t trust random calls or messages asking for personal details or money, even if they claim to be a loved one. Be suspect of anyone who’s pushy, demanding or in distress. Be wary of friendly or romantic advances from people you meet online, especially if they seem too good to be true or the relationship escalates quickly. When in doubt, hang up the phone or report the messages or account.
Verify identities. Call the person who allegedly contacted you and check out the story. Make sure you use a phone number you know for sure is correct — not the one the caller gives you. If the caller claims to be a law enforcement official, hang up and contact your local police directly.
Code word. Establish a code word with your family and trusted friends, something that is not available on social media. If you receive an unusual voice or video call from them, ask for the code word to validate the legitimacy of the call or reach out to them through a different means of communication to confirm it’s truly them contacting you.
Watch out for audiovisual red flags. Look closely for inconsistencies in lighting, shadows or reflections; unnatural movements, expressions or gestures; and mismatched lip-syncing. Also, be alert to audio inconsistencies, like background noises that don’t match the environment or an unnatural tone and pitch. Other signs that something isn’t right are when there are misaligned or mismatched features (such as eyes, mouth or teeth), poor-quality visuals, blurriness or glitches.
Never send money. Scammers often demand that you send money through ways that make it hard to trace or retrieve the funds. For instance, they may ask you to wire money, send cryptocurrency or buy gift cards. Remember, the Canadian Criminal Justice System doesn’t allow someone to be bailed out of jail with cash or crypto.8
Converse with caution. Be wary of rapid-fire online chats, as AI-generated responses may answer back quickly. If an online chat feels weird or “off,” you may be talking to a chatbot. AI-generated text or chats often contain repeat phrases or answers that have nothing to do with what you’re talking about.9
Secure your devices. Protect your computer, smartphone and tablets with up-to-date security software. Use strong passwords, and don’t click on links that seem suspicious.
Safeguard sensitive information. Don’t share personal details or documents with anyone over the phone, text message, social media, or the internet. Your financial institution will never ask for personal or financial information like account numbers, PINs, one-time passcodes or passwords through email or text message.
Be in the know. The Canadian Anti-Fraud Centre has a wealth of information about recent scams and fraud, including AI. Knowing the signs of fraud can help you spot and avoid a scam immediately.
Ask for help. If you’re not sure about something, ask a trusted family member or friend for advice. Don’t be afraid to report suspicious activity to the police — it might stop scammers from tricking others.
Impact of fraud on your finances and health
The impact of fraud on your finances can be significant. The Canadian Anti-Fraud Centre reported that more than $9.2 million was lost by seniors to emergency-grandparent scams in 2022 — a huge uptick from $2.4 million in 2021.10 While the impact of financial losses can be substantial, there are other reasons to stay vigilant. Fraud can also affect your:
Emotional well-being: Falling for a scam can rattle anyone. When seniors learn they’ve been tricked, they may feel betrayed, embarrassed, or anxious.
Health: The stress of losing money and feeling vulnerable may make physical or mental health problems worse or even trigger new ones.
Trust: Falling for a scam may make it hard to trust others. Seniors may withdraw from social interactions out of shame, making feelings of loneliness and isolation worse.
What to do if you’ve been scammed
If you or a loved one has been scammed, take these steps:
Call the authorities. If your gut tells you something isn’t right, consult local police right away. You can also file a report with the Canadian Anti-Fraud Centre.
Notify financial institutions. Tell your bank and credit card companies about the fraud to prevent further losses.
Get legal advice. Consult a lawyer to learn your rights and explore legal options.
Inform the credit bureaus. Ask TransUnion and Equifax to put a fraud alert on your credit report. Check your credit report on a regular basis.
Keep an eye on your accounts. Monitor your bank accounts, credit reports and other financial statements. Report any suspicious activity related to Scotiabank accounts right away.
Seek support. Reach out to family members, friends or support groups for emotional support. Remember, the blame primarily lies with the scammers.
Bottom line
If you’ve been scammed, you may feel embarrassed and want to ignore it. But it’s crucial to tell someone — reporting the scam can protect you and others from falling for the same trickery.
1 https://publications.gc.ca/collections/collection_2022/grc-rcmp/PS61-46-2021-eng.pdf
2 https://www.cbc.ca/news/canada/newfoundland-labrador/ai-vocal-cloning-grandparent-scam-1.6777106
3 https://globalnews.ca/news/9722136/artificial-intelligence-scams-bbb/
4 https://antifraudcentre-centreantifraude.ca/scams-fraudes/emergency-urgence-eng.htm
5 https://antifraudcentre-centreantifraude.ca/scams-fraudes/emergency-urgence-eng.htm
6 https://ottawa.citynews.ca/2023/02/12/online-romance-scammers-may-have-a-new-wingman-artificial-intelligence-6529849/
7 https://briefings.cba.ca/from-sweet-talk-to-double-cross-uncovering-the-latest-tricks-of-romance-scammers
8 https://antifraudcentre-centreantifraude.ca/scams-fraudes/emergency-urgence-eng.htm
9 https://briefings.cba.ca/from-sweet-talk-to-double-cross-uncovering-the-latest-tricks-of-romance-scammers
10 https://www.rcmp-grc.gc.ca/en/news/2023/rcmp-cafc-opp-raise-awareness-after-increase-emergency-grandparent-scams
6 minutes Engaged reading (read 07/28/25)
My fear is that people will be so bedazzled by articulate LLMs that they trust computers to make decisions that have important consequences. Computers are already being used to hire people, approve loans, determine insurance rates, set prison sentences, and much more based on statistical correlations unearthed by AI algorithms that have no basis for assessing whether the discovered correlations are causal or coincidental. LLMs are not the solution. They may well be the catalyst for calamity.
mindmatters.ai | articles
The most relevant question is whether computers have the competence to be trusted to perform specific tasks.
1 minute Engaged reading (read 08/11/25)
mindmatters.ai | articles
Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar.
11 minutes Engaged reading (read 05/16/25)
mindmatters.ai | articles
Not understanding what words mean or how they relate to the real world, chatbots have no way of determining whether their responses are sensible, let alone true
8 minutes Engaged reading (read 07/28/25)
AI is here to stay, and the disruption it brings is real. But the fear that it will replace human creativity is misplaced. Rather than supplanting human minds, it amplifies them — or, when misused, undermines them.
mindmatters.ai | articles
The real danger lies not in what AI can do, but in forgetting what only humans can do.
2 minutes Engaged reading (read 03/17/23)
nautil.us | articles
2 minutes Engaged reading (read 08/12/25)
Mediation analysis revealed that cognitive offloading partially explains the negative relationship between AI reliance and critical thinking performance. Younger participants (17–25) showed higher dependence on AI tools and lower critical thinking scores compared to older age groups. Advanced educational attainment correlated positively with critical thinking skills, suggesting that education mitigates some cognitive impacts of AI reliance.
Policymakers might need to support digital literacy programs, urging individuals to critically evaluate AI outputs and equipping them to navigate technological environments effectively.
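A sketch of what "partial mediation" means in the finding above, run on synthetic data (the coefficients are invented for illustration; only the variable roles come from the study): the effect of AI reliance on critical thinking shrinks, but does not vanish, once the mediator is controlled for.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
ai_reliance = rng.normal(size=n)
offloading = 0.6 * ai_reliance + rng.normal(size=n)            # mediator
critical = -0.3 * ai_reliance - 0.4 * offloading + rng.normal(size=n)

# Total effect: regress critical thinking on AI reliance alone.
total = sm.OLS(critical, sm.add_constant(ai_reliance)).fit()

# Direct effect: add the mediator. Partial mediation means the
# coefficient on AI reliance shrinks toward zero but stays negative.
X = sm.add_constant(np.column_stack([ai_reliance, offloading]))
direct = sm.OLS(critical, X).fit()

print(f"total effect:  {total.params[1]:+.2f}")   # ~ -0.54
print(f"direct effect: {direct.params[1]:+.2f}")  # ~ -0.30
```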
phys.org | articles
A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.
1 minutes Engaged reading (read 08/14/25)
robhorning.substack.com | articles
I am imagining a scenario in the near future when I will be working on writing something in some productivity suite or other, and as I type in the main document, my words will also appear in a smaller window to the side, wherein a large language model completes several more paragraphs of whatever I am trying to write for me, well before I have the chance to conceive of it.
6 minutes Engaged reading (read 08/25/25)
Summarization is such a plausible use case for consumer AI that political scientist Henry Farrell is able to analyze AI here in terms of who controls the “means of summarization.” He argues that there will be “ferocious fights between those who want to make money from the summaries, and those who fear that their livelihoods are being summarized out of existence,” a conflict that is already playing out in various intellectual property disputes he lists.
robhorning.substack.com | articles
A recurring concern with LLMs is that tech companies will run out of data with which to train them and they will gradually become less and less useful. How would tech companies, which have perfected so many means of surveillance and data extraction, run out of data, especially when institutions
3 minutes Engaged reading (read 05/23/25)
In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
techcrunch.com | articles
Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
13 minutes Engaged reading (read 08/19/25)
thebulletin.org | articles
The spread of AI-powered surveillance systems has empowered governments seeking greater control with tools that entrench non-democracy.
2 minutes Engaged reading (read 02/28/25)
If an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.
Of particular concern, Bengio says, is the emerging evidence of AI’s “self preservation” tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.
time.com | articles
When sensing defeat in a match against a skilled chess bot, advanced models sometimes hack their opponent, a study found.
19 minutes Engaged reading (read 08/20/25)
These human-AI relationships can progress more rapidly than human-human relationships – as some users say, sharing personal information with AI companions may feel safer than sharing with people. Such ‘accelerated’ comfort stems from both the perceived anonymity of computer systems and AI companions’ deliberate non-judgemental design – a feature frequently praised by users in a 2023 study. In the words of one interviewee: ‘sometimes it is just nice to not have to share information with friends who might judge me’.
AI companion companies highlight the positive effects of their products, but their for-profit status warrants close scrutiny. Developers can monetise users’ relationships with AI companions through subscriptions and possibly through sharing user data for advertising.
While communicating with a non-judgemental companion may contribute to the mental health benefits that some users report, researchers have argued that sycophancy could hinder personal growth. More seriously, the unchecked validation of unfiltered thoughts could undermine societal cohesion.
Disagreement, judgement and the fear of causing upset help to enforce vital social norms. There’s too little evidence to predict if or how the widespread use of sycophantic AI companions might affect such norms. However, we can make instructive hypotheses on human relations with companions by considering echo chambers on social media.
adalovelaceinstitute.org | articles
What are the possible long-term effects of AI companions on individuals and society?
6 minutes Engaged reading (read 08/11/25)
bbc.com | articles
Artificial intelligence is only as good as the data it learns from. But what if that data is biased?
1 minute Engaged reading (read 08/19/25)
bbc.com | articles
Policymakers should address the "troubling trend", says the organisation's managing director Kristalina Georgieva.
2 minutes Engaged reading (read 08/25/25)
"We start with the companies' 'true north.' What is their business strategy?" Varshney said. "From there, you break that down into the organization's processes and workflows. Eventually you'll find key high-value workflows and for those you figure out how to apply the right blend of AI, automation, and generative AI. Grounding AI models in your workflow, your processes, and your enterprise data is what creates value."
These champions will require support, too. They need training in AI ethics, the right set of tools available, and a culture in which AI will truly augment their workflow.
Developing AI leadership is not simply a matter of adopting AI and cloud services and connecting data silos. To successfully embrace the opportunities of AI, organizations must first draw up a strategic vision that is going to have a real impact. Once they do, they can deploy technology in ways that augment human intelligence. And with the right buy-in from executives at the top, an organization can follow its roadmap to true AI leadership.
businessinsider.com | articles
In the race for AI adoption, the winners will be those who align the technology with strategic business goals.
3 minutes Engaged reading (read 09/03/25)
cnbc.com | articles
OpenAI CEO Sam Altman thinks the artificial intelligence market is in a bubble, similar to the dotcom bubble, he recently told reporters.
6 minutes Engaged reading (read 07/29/25)
One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet."
cnet.com | articles
In the new book The AI Con, AI critics Emily Bender and Alex Hanna break down the smoke and mirrors around generative AI.
2 minutes Engaged reading (read 08/11/25)
cnn.com | articles
A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.
24 minutes Engaged reading (read 08/25/25)
darioamodei.com | articles
4 minutes Engaged reading (read 09/10/25)
devdiscourse.com | articles
Another major concern is disinformation. AI-generated content is becoming indistinguishable from human-authored narratives, enabling scalable, personalized propaganda campaigns that can erode democratic institutions, polarize societies, and compromise public trust in reliable information sources. Simultaneously, surveillance technologies, powered by facial recognition and data inference systems, threaten civil liberties by empowering state and corporate actors to monitor, manipulate, and suppress dissent on unprecedented scales.
11 minutes Engaged reading (read 07/28/25)
Right now, the future of trillion-dollar companies is at stake. Their fate depends on… prompt completion. Exactly what your mobile phone does. As an AI researcher, working in this field for more than 30 years, I have to say I find this rather galling. Actually, it’s outrageous. Who could possibly have guessed that this would be the version of AI that would finally hit prime time?
freethink.com | articles
LLMs might be one ingredient in the recipe for true artificial general intelligence, but they are surely not the whole recipe.
6 minutes Engaged reading (read 08/06/25)
Leaders must create a credible, engaging narrative for AI implementation that addresses employee concerns and fosters buy-in, energy and engagement. When employees strongly agree that leaders have communicated a clear plan for AI implementation, they are 2.9 times as likely to feel very prepared to work with AI and 4.7 times as likely to feel comfortable using AI in their role.
The plan should include clear guidelines that define how and where AI tools will be applied and empower employees to experiment with AI to do their jobs differently and better. It should also address the need for role-specific training so that employees can harness the full potential of the AI tools at their disposal.
gallup.com | articles
Want broader AI buy-in at your organization? Consider the role culture plays in your AI strategy.
16 minutes Engaged reading (read 08/05/25)
Data leakage
Data exfiltration
Unchecked surveillance and bias
Use of data without permission
Collection of data without consent
Collection of sensitive data
Considered the world's first comprehensive regulatory framework for AI, the EU Artificial Intelligence (AI) Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others. Though the EU AI Act doesn't specifically have separate prohibited practices on AI privacy, the act does enforce limitations on the usage of data. Prohibited AI practices include:
Untargeted scraping of facial images from the internet or CCTV for facial recognition databases; and
Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is required).
High-risk AI systems must comply with specific requirements, such as adopting rigorous data governance practices to ensure that training, validation and testing data meet specific quality criteria.
Reporting on data collection and storage
Providing more protection for data from sensitive domains
Following security best practices
Seeking and confirming consent
Limiting data collection
Conducting risk assessments
ibm.com | articles
AI arguably poses a greater data privacy risk than earlier technological advancements, but the right software solutions can address AI privacy concerns.
13 minutes Engaged reading (read 08/19/25)
linkedin.com | articles
1 minute Engaged reading (read 01/29/24)
livescience.com | articles
AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
3 minutes Engaged reading (read 08/14/25)
livescience.com | articles
A survey of workers who use AI has revealed the tools could be slowly impairing our critical thinking skills.
5 minutes Engaged reading (read 08/05/25)
However, the use of AI at work is also creating complex risks for organisations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT. Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%). What makes these risks challenging to manage is that over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.
This complacent use could be due to governance of responsible AI trailing behind. Only 47% of employees say they have received AI training and only 40% say their workplace has a policy or guidance on generative AI use.
mbs.edu | articles
New global study reflects the tension between the obvious benefits of artificial intelligence and the perceived risks.
6 minutes Engaged reading (read 07/30/25)
mozillafoundation.org | articles
Do AI chatbots spook your privacy spidey sense? You’re not alone! Here’s how you can protect more of your privacy while using ChatGPT and other AI chatbots.
4 minutes Engaged reading (read 08/25/25)
nbcnews.com | articles
Artificial intelligence, social media bots, Alexa, Siri and other voice-controlled computers could end up ruling — and ruining — our lives.
1 minute Engaged reading (read 08/25/25)
“Luddites want technology—the future—to work for all of us,” he told the Guardian.
“If cognitive workers are more efficient, they will accelerate technical progress and thereby boost the rate of productivity growth—in perpetuity,”
Recently, however, some prominent economists have offered darker perspectives. Daron Acemoglu, an M.I.T. economist and a Nobel laureate, told MIT News in December that A.I. was being used “too much for automation and not enough for providing expertise and information to workers.” In a subsequent article, he acknowledged A.I.’s potential to improve decision-making and productivity, but warned that it would be detrimental if it “ceaselessly eliminates tasks and jobs; overcentralizes information and discourages human inquiry and experiential learning; empowers a few companies to rule over our lives; and creates a two-tier society with vast inequalities and status differences.” In such a scenario, A.I. “may even destroy democracy and human civilization as we know it,” Acemoglu cautioned. “I fear this is the direction we are heading in.”
newyorker.com | articles
John Cassidy considers the history of the Luddite movement, the effects of innovation on employment, and the future economic disruptions that artificial intelligence might bring.
13 minutes Engaged reading (read 07/28/25)
When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
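Chiang's analogy rests on a real distinction, sketched below: a lossless compressor (zlib here) returns the original text verbatim, while a lossy one (a toy vowel-dropper, my stand-in, not Chiang's example) saves space by discarding detail, so any reconstruction can only look plausible.

```python
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 20

lossless = zlib.compress(text)
assert zlib.decompress(lossless) == text             # exact round-trip

lossy = bytes(c for c in text if c not in b"aeiou")  # detail discarded
print(len(text), len(lossless), len(lossy))
# Nothing can recover the original from `lossy`; a reconstruction is a
# guess that merely looks right: the "blurry JPEG" of the web.
```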
newyorker.com | articles
The noted speculative-fiction writer Ted Chiang on OpenAI’s chatbot ChatGPT, which, he says, does little more than paraphrase what’s already on the Internet.
1 minute Engaged reading (read 03/20/23)
So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”
Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”
These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core?
A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
I found the conversation less eerie than others. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him.
Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human.
One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation.
What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.
We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is.
We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
nytimes.com | articles
Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try.
1 minute Engaged reading (read 08/19/25)
persuasion.community | articles
It’s a mistake to outsource our creative and critical thinking tasks to AI.
14 minutes Engaged reading (read 08/25/25)
poynter.org | articles
The buzzy new AI tool can quickly create entire news organizations out of thin air. Should we be freaking out?
10 minutes Engaged reading (read 08/25/25)
quantamagazine.org | articles
Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors.
7 minutes Engaged reading (read 07/28/25)
scientificamerican.com | articles
It’s important that we use accurate terminology when discussing how AI chatbots make up information
5 minutes Engaged reading (read 09/02/25)
staysafeonline.org | articles
A guide to establishing and maintaining safe words to protect you, your family, and your workplace against AI scams and deepfakes.
7 minutes Engaged reading (read 08/05/25)
When generative AI is trained on toxic or poorly moderated social media platforms, it doesn’t just reflect ideological bias — it reinforces and systematizes it. These models absorb not only language and behavior but also the underlying dynamics of the platforms themselves, including their echo chambers, discriminatory patterns, and lax moderation standards.
The risks are even more pronounced in non-English contexts. As ARTICLE 19 has shown, AI moderation tools often fail in Global South languages, allowing harmful content to thrive. In such environments, generative AI can further marginalize vulnerable groups and be misused to silence dissenting voices, reinforcing digital inequalities. As Access Now notes, the absence of culturally competent moderation deepens these divides and exposes already at-risk communities to greater harm.
The European experience with the General Data Protection Regulation (GDPR) offers valuable insights. Under GDPR, the use of personal data for purposes vastly different from its original context — such as using social media interactions to train commercial AI systems — faces significant legal hurdles.
At the crossroads
This fusion of generative AI and social media marks a transformation — not just in how content is created, but in who controls meaning. The centralization of AI within social media companies grants them unprecedented power to mold discourse, curate narratives, and structure digital life itself. Without clear regulatory guardrails, this power shift risks deepening inequality, weakening privacy protections, and chilling freedom of expression across digital spaces. As platforms evolve from hosts of content to architects of generative systems, we must urgently reconsider how user data is governed — and who gets to decide what digital futures look like.
techpolicy.press | articles
The centralization of AI within social media companies grants them unprecedented power to govern user feeds and data, writes Ameneh Dehshiri.
7 minutes Engaged reading (read 08/25/25)
In 2017, then–Federal Trade Commission Chair Maureen Ohlhausen gave a speech to antitrust lawyers warning about the rise of algorithmic collusion. “Is it okay for a guy named Bob to collect confidential price strategy information from all the participants in a market and then tell everybody how they should price?” she asked. “If it isn’t okay for a guy named Bob to do it, then it probably isn’t okay for an algorithm to do it either.”
Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up.
San Francisco passed a first-of-its-kind ordinance banning “both the sale and use of software which combines non-public competitor data to set, recommend or advise on rents and occupancy levels.”
theatlantic.com | articles
Algorithmic collusion appears to be spreading to more and more industries. And existing laws may not be equipped to stop it.
15 minutes Engaged reading (read 09/03/25)
thecable.ng | articles
19 minutes Engaged reading (read 05/09/25)
Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?
Truly something to think about.
Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.
The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.
Luddite movement 2.0 may be needed!
If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.
theguardian.com | articles
The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?
17 minutes Engaged reading (read 08/19/25)
There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life. And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoilation.
The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.
Clear away the hallucinations and it looks far more likely that AI will be brought to market in ways that actively deepen the climate crisis. First, the giant servers that make instant essays and artworks from chatbots possible are an enormous and growing source of carbon emissions. Second, as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro targeting ads at us so that we buy more useless, carbon-spewing stuff.
Because we do not live in the Star Trek-inspired rational, humanist world that Altman seems to be hallucinating. We live under capitalism, and under that system, the effects of flooding the market with technologies that can plausibly perform the economic tasks of countless working people is not that those people are suddenly free to become philosophers and artists. It means that those people will find themselves staring into the abyss – with actual artists among the first to fall.
theguardian.com | articles
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
4 minutes Engaged reading (read 09/02/25)
theguardian.com | articles
A perverse information ecosystem is being mined by big tech for profit, fooling the unwary and sending algorithms crazy, says Guardian columnist Nesrine Malik
1 minute Engaged reading (read 08/19/25)
theguardian.com | articles
AI will do the thinking, robots will do the doing. What place do humans have in this arrangement – and do tech CEOs care? says Ed Newton-Rex, founder of Fairly Trained
1 minute Engaged reading (read 07/29/25)
Early AIs didn’t know much about the world, and academic departments lacked the computing power to exploit them at scale. The difference today is not intelligence, but data and power. The big tech companies have spent 20 years harvesting vast amounts of data from culture and everyday life, and building vast, energy-hungry data centres filled with ever more powerful computers to churn through it. What were once creaky old neural networks have become super-powered, and the gush of AI we’re seeing is the result.
AI is now engaging with the underlying experience of feeling, emotion and mood, and this will allow it to shape and influence the world at ever deeper and more persuasive levels.
The belief in this kind of AI as actually knowledgeable or meaningful is actively dangerous. It risks poisoning the well of collective thought, and of our ability to think at all. If, as is being proposed by technology companies, the results of ChatGPT queries will be provided as answers to those seeking knowledge online, and if, as has been proposed by some commentators, ChatGPT is used in the classroom as a teaching aide, then its hallucinations will enter the permanent record, effectively coming between us and more legitimate, testable sources of information, until the line between the two is so blurred as to be invisible. Moreover, there has never been a time when our ability as individuals to research and critically evaluate knowledge on our own behalf has been more necessary, not least because of the damage that technology companies have already done to the ways in which information is disseminated. To place all of our trust in the dreams of badly programmed machines would be to abandon such critical thinking altogether.
Excellent point and a reason for readocracy and thinking through AI.
AI technologies are bad for the planet too. Training a single AI model – according to research published in 2019 – might emit the equivalent of more than 284 tonnes of carbon dioxide, which is nearly five times as much as the entire lifetime of the average American car, including its manufacture. These emissions are expected to grow by nearly 50% over the next five years, all while the planet continues to heat up, acidifying the oceans, igniting wildfires, throwing up superstorms and driving species to extinction. It’s hard to think of anything more utterly stupid than artificial intelligence, as it is practised in the current era.
The UN estimates that one disappears every two weeks, and with that disappearance goes generations of knowledge and experience. This problem, the result of colonialism and racist assimilation policies over centuries, is compounded by the rising dominance of machine-learning language models, which ensure that popular languages increase their power, while lesser-known ones are drained of exposure and expertise. In Aotearoa New Zealand, a small non-profit radio station called Te Hiku Media, which broadcasts in the Māori language, decided to address this disparity between the representation of different languages in technology. Its massive archive of more than 20 years of broadcasts, representing a vast range of idioms, colloquialisms and unique phrases, many of them no longer spoken by anyone living, was being digitised, but needed to be transcribed to be of use to language researchers and the Māori community. In response, the radio station decided to train its own speech recognition model, so that it would be able to “listen” to its archive and produce transcriptions.
The lesson of the current wave of “artificial” “intelligence”, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished.
theguardian.com | articles
The long read: Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous
10 minutes Engaged reading (read 09/03/25)
We just want more researchers to acknowledge and be aware of potential misuse. When you start working in the chemistry space, you do get informed about misuse of chemistry, and you’re sort of responsible for making sure you avoid that as much as possible. In machine learning, there’s nothing of the sort. There’s no guidance on misuse of the technology.
theverge.com | articles
AI could be just as effective in developing biochemical weapons as it is in identifying helpful new drugs, researchers warn.
12 minutes Normal reading (read 05/16/25)
Given that higher-paying white-collar jobs cost the company more in salary, they are the jobs that make sense for AI to replace first. From a cost-savings point of view, it's not just low-level jobs that are in danger.
youtube.com | videos
Support Grey making videos: https://www.patreon.com/cgpgrey
Robots, Etc: Terex Port automation: http://www.terex.com/port-solutions/en/products/new-equipmen...
5 minutes Normal reading (read 08/25/25)
youtube.com | videos
Author of NEXUS: A Brief History of Information Networks from the Stone Age to AI, Yuval Noah Harari, joins Morning Joe to continue the conversation on the p...
12 minutes Normal reading (read 08/21/25)
youtube.com | videos
Generative A.I. is the nuclear bomb of the Information Age. If the Internet doesn’t feel as creative or fun or interesting as you remember it, you’re not alone. The so-called ‘Dead Internet Theory’ explains why. The rise of artificially generated content killed the internet. How did this happen? Why? And… Is it still possible to stop it?
6 minutes Normal reading (read 08/25/25)
youtube.com | videos
Laurie Segall agreed to be part of an event where a pair of technologists commanded open source A.I. platforms to create a campaign of misinformation about h...
49 minutes Normal reading (read 08/25/25)
youtube.com | videos
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught i...