expatriatedtexan
Habitual Line Stepper
- Aug 17, 2005
- 16,878
- 12,536
I think the key is that as an end user, I want it clearly labeled when AI took a guess or approximated an answer, rather than getting the answer from an established and somehow verifiable source. I saw a comment from a dude a couple weeks back - AI won't take your job, but the person who takes your job will know how to use AI. I definitely believe in this.
I use ChatGPT a lot as a software developer. It's good for generating code, and it acts like a 2020/updated version of Stack Overflow for me. But not all of its answers are correct, even though it states them matter-of-factly. That really bothers me. It definitely does need intervention. For me it's a tool to coach me toward an answer, but that answer should be taken with a huge grain of salt - trust but verify.
Like if I ask, what is the sum of 2 + 2?
If the answer is anything other than 4, it needs to be flagged as being generated by AI or a common core math instructor. Either way, it's wrong.
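To put that trust-but-verify idea in code terms, here's a throwaway sketch (the `add` helper is hypothetical, not something ChatGPT actually gave me): before I drop an AI-generated function into anything real, I throw cases I already know the answers to at it.

```python
# Pretend ChatGPT handed me this helper.
def add(a, b):
    return a + b

# Trust but verify: check it against answers I already know.
assert add(2, 2) == 4      # anything other than 4 gets flagged
assert add(-1, 1) == 0
assert add(0, 0) == 0
print("checks passed")
```

If an assertion blows up, the "matter-of-fact" answer was wrong, and I know it before it costs me anything.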