#techethics


ChatGPT accidentally exposed OpenAI's deceptive business model: "GPT-5 is often just routing prompts between GPT-4o and o3."

Corporate AI marketing manufactures technological breakthroughs to extract premium pricing from commoditized infrastructure.

Classic Silicon Valley grift: rebrand existing products, multiply prices, rely on information asymmetry to exploit customers.
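The routing claim above can be illustrated with a minimal sketch: a dispatcher that forwards each prompt to a cheaper or stronger backend model based on a crude complexity heuristic. Everything here — the model names, the cue words, the threshold — is an illustrative assumption, not OpenAI's actual implementation.

```python
# Hypothetical prompt router: dispatch to a cheap model or a stronger
# reasoning model depending on a rough complexity estimate.
# All names and thresholds are illustrative assumptions.

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: longer prompts and reasoning cues score higher."""
    cues = ("why", "prove", "step by step", "derive", "explain")
    score = min(len(prompt) / 500, 1.0)          # length contribution, capped at 1
    score += 0.3 * sum(c in prompt.lower() for c in cues)  # +0.3 per cue present
    return score

def route(prompt: str) -> str:
    """Return the backend a router of this kind might pick for the prompt."""
    if estimate_complexity(prompt) > 0.6:
        return "strong-reasoning-model"
    return "fast-cheap-model"
```

Under these made-up weights, `route("hi")` lands on the cheap backend while a prompt like "Explain step by step why the sky is blue." trips several cues and gets the stronger one. The user never sees which backend answered, which is exactly the information asymmetry the post is complaining about.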

ONLYOFFICE introduces a flexible AI agent for document, presentation & spreadsheet editing—supporting cloud and local AI models 💻

As open-source users embrace AI, local-only options must be clearly highlighted & respected for privacy and control 🔒

This move is welcome—only if AI runs fully locally with transparent settings.

@ONLYOFFICE

news.itsfoss.com/onlyoffice-ai

It's FOSS News · Microsoft 365 Copilot Who? ONLYOFFICE's AI Agent Might Surprise You!
ONLYOFFICE's AI agent helps you write, edit, and work faster. And yes, you have the option to use local AI.

'Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework' - an article in Technology, Mind, and Behavior (TMB), published by the American Psychological Association, on #ScienceOpen:

➡️🔗 scienceopen.com/document?vid=8

ScienceOpen · Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework

While generative artificial intelligence (AI) may lead to technological advances in the mental health field, it poses safety risks for mental health service consumers. Furthermore, clinicians and health care systems must attend to safety and ethical considerations prior to deploying these AI-mental health technologies. To ensure the responsible deployment of AI-mental health applications, a principled method for evaluating and reporting on AI-mental health applications is needed. We conducted a narrative review of existing frameworks and criteria (from the mental health, health care, and AI fields) relevant to the evaluation of AI-mental health applications. We provide a summary and analysis of these frameworks, with a particular emphasis on the unique needs of the AI-mental health intersection. Existing frameworks contain areas of convergence (e.g., frequent emphasis on safety, privacy/confidentiality, effectiveness, and equity) that are relevant to the evaluation of AI-mental health applications. However, current frameworks are insufficiently tailored to unique considerations for AI and mental health. To address this need, we introduce the Readiness Evaluation for AI-Mental Health Deployment and Implementation (READI) framework for mental health applications. The READI framework comprises considerations of Safety, Privacy/Confidentiality, Equity, Effectiveness, Engagement, and Implementation. The READI framework outlines key criteria for assessing the readiness of AI-mental health applications for clinical deployment, offering a structured approach for evaluating these technologies and reporting findings.

Update: This piece is getting some interesting pushback from parents who think I'm being alarmist about AI toys over on my other social platforms.
 
On the other side, I'm hearing from plenty of people taking the usual "AI BAAAAD" stance, some of whom assume I agree with them simply because I took a hard-line position in this post.
 
For clarification: I'm not anti-AI. I use these tools daily for my research and writing as well as accessibility aids to offset some of the disadvantages I face due to my blindness. I study AI from the computer scientist perspective and am studying to be an elementary teacher precisely because I see AI's educational potential. I'm not even entirely against the idea of AI companionship, if it's framed right.
 
What actually bothers me is the business model. When Moxie robots suddenly "died" last year because the company went under, kids had to grieve their artificial friend. Parents got a scripted letter explaining why their $799 companion stopped talking, which provided little comfort to kids experiencing digital abandonment. Trust me, the videos I've seen of kids crying because their beloved friend unexpectedly died overnight are truly heartbreaking.
 
That's not a glitch; that's what happens when you outsource childhood relationships to venture capital that only cares about investment returns.
 
The real question isn't whether AI toys are inherently bad. It's whether we're okay with corporations experimenting on our kids' emotional development while claiming it's "age-appropriate play."
 
What are your thoughts? Let me know in the comments.
 
open.substack.com/pub/kaylielf

#AIToys #ChildPrivacy #ChildDevelopment #DigitalRights #TechEthics #SurveillanceCapitalism #COPPA #DataPrivacy #ChildSafety #TechRegulation #DigitalLiteracy #ParentingInTheDigitalAge #EdTech #CorporateAccountability #TechCriticism #EthicalTech

open.substack.com · How Six Decades of Cultural Conditioning Primed Us for the AI Childhood We Didn't Know We Were Asking For
AI toys promise magical childhood experiences, but they're collecting our children's deepest secrets for corporate profit. 🧸🤖