If you think #AI can save you time you're wrong.
I'm working on a blog post and I started my research in Google, as one does sometimes. It helpfully spat out some accurate-sounding bullshit (complete with sources, at least).
The first entry looks accurate enough and jibes with my loose memory of the subject. So I'm thinking GREAT! This is going to save me so much time!
The second entry looks accurate too, but it's about something I know nothing about, and it strangely discusses one loosely related concept under the heading of another loosely related concept? Ok. The source is Wikipedia. This is good.
Go to Wikipedia.
Ok, the guy's name is there. It's even listed under the right sub-section. But he is absolutely NOT what Google is telling me he is, even if he ultimately did do what Google is telling me he did.
It's a blatant #mistake? lie? hallucination? that would cause me to lose all #credibility.
So there goes that idea.
At least Google gave me a list of #sources. That's helpful. So now I've got to go through all the sources and write something completely from scratch.
Which, if I'm being honest, is what I should have done originally.
Also... "AI responses may include mistakes." is not worded strongly enough.
It should read, "Today's AI generates answers that are inaccurate yet sound plausible, and are not based on any facts or reality. AI responses should not be relied on for any business, academic, scholarly, legal, technical, medical, scientific, financial, historical, or military purposes."
But investors probably won't like that.