this post was submitted on 22 Aug 2023

AI Infosec


Infosec news and articles related to AI.


Hi all,

Had a small chat about #AI with somebody yesterday when this video came up: "10 Things They're NOT Telling You About The New AI" (*)

What strikes me the most about this video is not the message, but the way it is delivered. It has all the hallmarks of #disinformation, especially as it comes from a YouTube channel that does not even give a name or show a person behind it.

Does anybody know this organisation and who is behind it?

Is this "you are all going to lose your job to AI, and that's all due to ..." message new? What is the goal behind this?

(Sorry to post this message here. I have been looking for a Lemmy/kbin forum on disinformation, but did not find one, so I guess it is most relevant here.)

(*) https://www.youtube.com/watch?v=qxbpTyeDZp0

top 7 comments
[–] ChaoticNeutralCzech@feddit.de 0 points 1 year ago (1 children)
  1. We don’t do hashtags here
  2. Don’t use footnotes for hyperlinks when `[this syntax](https://example.com/)` is available
[–] kristoff@infosec.pub 0 points 1 year ago (1 children)

I added the hashtag to give the post more visibility on the fediverse (and to give Lemmy a little more visibility among people on the Mastodon side of the fediverse).

I didn't know about the ability to add URLs inside the post.

But in general I avoid putting URLs inside the body of a post. Putting the URL at the end allows people to read the complete post first and then follow the links. People tend to click on a link, get "stuck" there, and never read the rest of the post (especially on social media platforms like YouTube that are designed to hold people's attention as long as possible).

[–] ChaoticNeutralCzech@feddit.de 0 points 1 year ago

Guess what, there is syntax for footnotes[^1] now.

[^1]: GitHub reference on footnotes in Markdown

The web UI supports them but most apps don't (yet). Use at your own risk.
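For anyone unfamiliar with the two pieces of syntax discussed above, here is a minimal sketch of how they look in a raw post body (the label `note1` and the example URL are just illustrative placeholders):

```markdown
Inline links use [this syntax](https://example.com/).

Footnotes are referenced in the text like this[^note1] and defined anywhere in the post.

[^note1]: The footnote text, which the web UI renders at the bottom of the post.
```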

[–] howrar@lemmy.ca 0 points 1 year ago (1 children)

I don't know who's behind these videos. What I can tell you, as someone involved in AI research, is that the information is factual, but it's presented in a way that makes it sound a lot scarier than it actually is.

[–] kristoff@infosec.pub 0 points 1 year ago* (last edited 1 year ago) (1 children)

Hi, I agree there is truth in these statements, but, as said, the tone is way more menacing than the reality.

I wonder why anybody is spending all this time and effort (and hence money) to produce this kind of content.

What is the goal? To scare people? To create uncertainty and fear?

What does strike me is that it talks about US companies, but does not say anything about companies from the rest of the world or state-driven projects.

So, hence also my question: is this a new phenomenon (with the rise of ChatGPT), or has this existed for a long time?

Kr.

[–] howrar@lemmy.ca 0 points 1 year ago (1 children)

Probably some dude just trying to make a living. Just like how Facebook isn't out there looking to sow chaos and subvert democracy, but it just so happens that content contributing to this gets more engagement and more income.

[–] ChaoticNeutralCzech@feddit.de 0 points 1 year ago

> Facebook isn’t out there looking to sow chaos and subvert democracy

They ran emotional experiments on their users, knowing they hadn't really consented. I think Facebook and 𝕏 might be pushing some agenda, though not necessarily an anti-democracy one.