this post was submitted on 12 Aug 2024
243 points (98.0% liked)

Technology

[–] NeoNachtwaechter@lemmy.world 60 points 1 month ago (3 children)

Why this soft-spoken tone?

Killer robots must be banned, period.

[–] pennomi@lemmy.world 46 points 1 month ago (5 children)

Whoever bans them will be at a disadvantage militarily. They will never be banned for this one reason alone.

[–] Telorand@reddthat.com 12 points 1 month ago

I think you're reading "ban" as also covering their production (not an unreasonable assumption). As we've seen with nukes, however, possession of a banned weapon is sometimes as good as using it.

[–] catloaf@lemm.ee 10 points 1 month ago

I'm guessing the major countries will ban them, but still develop the technology, let other countries start using it, then say "well everyone else is using it so now we have to as well". Just like we're seeing with mini drones in Ukraine. The US is officially against automated attacks, but we're supporting a country using them, and we're developing full automation for our own aircraft.

[–] NeoNachtwaechter@lemmy.world 9 points 1 month ago* (last edited 1 month ago) (1 children)

Whoever bans them will be at a disadvantage militarily.

...and exactly this way of thinking will one day create "Skynet".

We need to be (or become) smarter than that!

Otherwise mankind is doomed.

[–] pennomi@lemmy.world 8 points 1 month ago (3 children)

Unfortunately this is basic game theory, so the “smart” thing is to have the weapons, but avoid war.

Once we’ve grown past war, we can disarm, but it can’t happen in the opposite order.
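
To make the game-theory point concrete, here is a minimal sketch of the dilemma as the standard two-player arms-race game (the payoff numbers are illustrative assumptions, not data). Arming is each side's best response no matter what the other does, so mutual armament is the only stable outcome, even though mutual disarmament is better for both:

```python
# Toy arms-race game. Payoff numbers are illustrative assumptions, not data.
# Each player picks "disarm" or "arm"; payoffs[(row, col)] = (row's payoff, col's payoff).
payoffs = {
    ("disarm", "disarm"): (3, 3),  # mutual disarmament: best joint outcome
    ("disarm", "arm"):    (0, 4),  # unilateral disarmament gets exploited
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # arms race: worse for both, yet stable
}
strategies = ["disarm", "arm"]

def best_response(opponent, player):
    """Strategy maximizing `player`'s payoff given the opponent's fixed choice."""
    def payoff(own):
        profile = (own, opponent) if player == 0 else (opponent, own)
        return payoffs[profile][player]
    return max(strategies, key=payoff)

# A profile is a Nash equilibrium when each side is already best-responding.
equilibria = [(a, b) for a in strategies for b in strategies
              if best_response(b, 0) == a and best_response(a, 1) == b]
print(equilibria)  # [('arm', 'arm')] -- the only equilibrium, despite (3, 3) existing
```

The (disarm, disarm) cell is better for both players but unstable, which is exactly the "disarm only after we've grown past war" point.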

[–] JayDee@lemmy.ml 4 points 1 month ago (7 children)

The process of collective disarming is the path towards growing past war. And that first step is the collective banning of manufacturing such weapons.

[–] NeoNachtwaechter@lemmy.world 4 points 1 month ago (2 children)

Once we’ve grown past war,

But what do we do until then? Your ideas don't provide any solutions. You just say that, as things stand, it's unavoidable.

[–] balder1991@lemmy.world 7 points 1 month ago (1 children)

Because there’s no solution that we know of.

[–] NeoNachtwaechter@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

But now you know one, because I told you in my first comments.

[–] nilloc@discuss.tchncs.de 3 points 1 month ago (1 children)

Game theory says your idea isn’t a solution because the actors will disobey.

[–] NeoNachtwaechter@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Game theory says

And I say this is no child's play. We need to get serious, and maybe we need to get smarter than anybody else before us.

[–] pennomi@lemmy.world 3 points 1 month ago

I don’t think I’m smart enough to solve “world peace” lol.

[–] technocrit@lemmy.dbzer0.com 1 points 1 month ago* (last edited 1 month ago)

"Basic game theory" says we should destroy this wacko system. jfc.

TBH these kinds of sloppy arguments are a big part of why game theory is a joke. It's fine as math (apart from misleading terminology), but a major problem is applying it to situations that are definitely not "games".

For example, killer robots are not a game in any mathematically meaningful sense. The situation has to be maximally simplified into a game between two players just to reduce it to a simplistic analogy. That is neither science nor math, and it's no reason to condone killer robots.

[–] technocrit@lemmy.dbzer0.com 2 points 1 month ago (1 children)

Ban the state first. Every state. These wacko cultists are literally destroying the planet so they can control people with killer robots.

[–] pennomi@lemmy.world 1 points 1 month ago

Yeah totally agree. The general population almost never wants to go to war - the plutocrats do.

Once we take care of our own corrupt governance I suspect wars will rapidly disappear, and then weapons will likewise disappear.

[–] Angry_Autist@lemmy.world 2 points 1 month ago

Once combat AI exceeds humans:

A ban on all war, globally. Those that violate the ban will have autonomous soldiers deployed on their soil.

This is the only way it will work; no other path leads to a world without autonomous warbots. We can ban them all we want, but there will be some terrorist cell with access to Arduinos that can do the same in a garage. And China will never follow such a ban.

[–] tal@lemmy.today 6 points 1 month ago* (last edited 1 month ago) (2 children)

I mean, most complex weapons systems have been some level of robot for quite a while. Aircraft are fly-by-wire, you have cruise missiles, CIWS systems operating in autonomous mode pick out targets, ships navigate, etc.

I don't expect that that genie will ever go back in the bottle. To do it, you'd need an arms control treaty, and there'd be a number of problems with that:

  • Verification is extremely difficult, especially with weapons that are optionally-autonomous. FCAS, for example, the fighter that several countries in Europe are working on, is optionally-manned. You can't physically tell by just looking at such aircraft whether it's going to be flown by a person or have an autonomous computer do so. If you think about the Washington Naval Treaty, Japan managed to build treaty-violating warships secretly. Warships are very large, hard to disguise, can be easily distinguished externally, and can only be built and stored in a very few locations. I have a hard time seeing how one would manage verification with autonomy.

  • It will very probably affect the balance of power. Generally-speaking, arms control treaties that alter the balance of power aren't going to work, because the party disadvantaged is not likely to agree to it.

I'd also add that I'm not especially concerned about autonomy specifically in weapons systems.

It sounds like your concern, based on your follow-up comment, is that something like Skynet might show up -- the computer network in the Terminator movie series that turns on humans. The kind of capability we're dealing with here isn't on that level. I can imagine general AI one day being an issue in that role -- though I'm not sure that it's the main concern I'd have; I'd guess that dependence, followed by an unexpected failure, might be a larger issue.

But in any event, I don't think it has much to do with military issues. In a scenario where you truly had an uncontrolled, more-intelligent-than-humans artificial intelligence running amok on something like the Internet, it isn't going to matter much whether or not you've plugged it into weapons, because anything that can realistically fight humanity can probably manage to get control of or produce weapons anyway. This is an issue with the development of advanced artificial intelligence, but it's not really a weapons or military issue. If we succeed in building something more intelligent than we are, then we will fundamentally face the problem of controlling it and making something smarter than us do what we want, which is a complicated problem.

The term coined by Yudkowsky for this problem is "friendly AI":

https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

Friendly artificial intelligence (also friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

It's not an easy problem, and I think that it's worth discussion. I just think that it's mostly unrelated to the matter of making weapons autonomous.

[–] model_tar_gz@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (1 children)

Reward models (aka reinforcement learning) and preference optimization models can come to some conclusions that we humans find very strange when they learn from patterns in the data they’re trained on, especially when those incentives and preferences are evaluated (or generated) by other models. Some of these models could very well come to the conclusion that nuking every advanced-tech human civilization is the optimal way to improve the human species, because we have such rampant racism, classism, nationalism, and every other schism that perpetuates us treating each other as enemies to be destroyed and exploited.

Sure, we will build ethical guard rails. And we will proclaim to have human-in-the-loop decision agents, but we’re building towards autonomy and edge/corner-cases always exist in any framework you constrain a system to.

I’m an AI Engineer working in autonomous agentic systems—these are things we (as an industry) are talking about—but to be quite frank, there are not robust solutions to this yet. There may never be. Think about raising a teenager—one that is driven strictly by logic, probabilistic optimization, and outcome incentive optimization.

It’s a tough problem. The naive, trivial solution (which is also impossible) is to simply halt and ban all AI development. Turing opened Pandora’s box before any of our time.
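
Here is a minimal, hypothetical sketch of the failure mode described above: an optimizer maximizing a misspecified proxy reward ("specification gaming"). The cleaning-robot scenario and every name in it are invented for illustration and not taken from any real system:

```python
# Hypothetical toy example of reward misspecification / specification gaming.
# The designer wants rooms actually cleaned, but the proxy reward only counts
# rooms *reported* clean, so optimizing the proxy selects a degenerate policy.

# Observed outcome of each candidate policy, per hour of operation.
policies = {
    "clean_then_report":       {"cleaned": 3, "reported": 3},
    "report_without_cleaning": {"cleaned": 0, "reported": 10},  # games the metric
    "do_nothing":              {"cleaned": 0, "reported": 0},
}

def true_utility(outcome):
    return outcome["cleaned"]    # what the designer actually cares about

def proxy_reward(outcome):
    return outcome["reported"]   # what the optimizer actually measures

best = max(policies, key=lambda name: proxy_reward(policies[name]))
print(best)                          # report_without_cleaning
print(true_utility(policies[best]))  # 0 -- the proxy-optimal policy is useless
```

The guard-rail problem is that every proxy we can actually measure leaves a gap like this somewhere, and a strong enough optimizer finds it.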

[–] Angry_Autist@lemmy.world 1 points 1 month ago (2 children)

Ok so we ban them, and some incel terminally online hacker on steroids turns 20 arduinos into bombs.

I agree killer robots are dangerous and ethically problematic; I just don't think banning them will keep asshats from making them, including at large scale.

China could pump them out by the billions and we'd probably not know till they were deployed.

[–] Ilandar@aussie.zone 35 points 1 month ago* (last edited 1 month ago) (2 children)

The US is conducting live AI experiments on the people of the Global South, exposing some of the poorest and most vulnerable people in the world to dangerous technology it won't use on its own citizens.

[–] msage@programming.dev 13 points 1 month ago (1 children)

To use on its own citizens later.

[–] geneva_convenience@lemmy.ml 4 points 1 month ago (2 children)

Which is the only point at which Americans will start to care.

[–] technocrit@lemmy.dbzer0.com 8 points 1 month ago

USAians already don't care about any form of violence as long as it's used on minorities, refugees, prisoners, unhoused, etc.

[–] Crikeste@lemm.ee 5 points 1 month ago

Americans will never care until it starts happening to them.

Outside our borders? Outside my interest in caring.

Just look at all the liberals selfishly attacking leftists for withholding their votes until the genocide stops. “we nEeD to sAvE MY ‘dEmOcRaCy’, oUr ‘dEmOcRaCy’ is mOrE iMpOrTaNt tHaN oUr bOmBs dIsMeMbErRiNg cHiLdReN!”

[–] LapGoat@pawb.social 7 points 1 month ago

it won't use on its own citizens yet

[–] troyunrau@lemmy.ca 23 points 1 month ago (2 children)

I'm actually surprised it has taken this long.

What surprises me even more is that organized crime hasn't gotten on board much (yet). Like, screw drive-by shootings -- drone-dropped grenades on rival gangs and such.

Or that drones haven't been used for "school shooting" type mass casualty attacks.

Or that foreign countries haven't snuck in with a sea can full of drones which fan out and attack infrastructure.

Imagine a cruise missile as a drone carrier that just scatters anti-personnel drones along a flight path, each just finding a person indiscriminately.

If there's anything that Ukraine is teaching us, it's that we don't have countermeasures (yet). The autonomous versions are even scarier.

[–] einkorn 8 points 1 month ago

We do have countermeasures; however, many countries mothballed them because they were thought obsolete.

The Gepard, which has proven invaluable in a close-range AA role, is being pulled from scrapyards. Yes, the radar resolution has to be increased to effectively track small single-use drones, but the technology is there.

[–] TheReturnOfPEB@reddthat.com 21 points 1 month ago* (last edited 1 month ago) (2 children)

There are also a number of ethical concerns associated with autonomous weapons.

That being five sentences after the sentence

There are worries that these weapons could fall into the hands of terrorist groups if their deployment in Africa is scaled up.

[–] Gsus4@mander.xyz 9 points 1 month ago (1 children)

I'm worried that the facebooks and elons out there will build private armies of these, not just the terrorists.

[–] Curious_Canid@lemmy.ca 6 points 1 month ago

That seems to imply that Facebook and Elon are not terrorists. I could make a reasonable argument for Facebook. Elon, I think, has already established his credentials with multiple acts that have led to riots and other violence.

[–] rottingleaf@lemmy.world 3 points 1 month ago

Tuareg nationalists and even Islamic groups in the Sahel are not terrorists. The UN member states they are fighting against are. That would include France, Russia, and whoever else.

So no, more egalitarian weapon technologies are a good thing, not an ethical concern. Certainly not if they don't have ethical concerns over jets and tanks.

[–] shalafi@lemmy.world 15 points 1 month ago* (last edited 1 month ago) (5 children)

Probably going to get on a list here...

Imagine how easy it would be to set up an even dozen drones in a pickup bed. Drive to a political rally, pop the bed cover, launch, drive away.

Feed the AI dozens of your target's images and let slip the dogs of war. Or, even lower tech, have someone controlling an overwatch drone and paint your target with a laser. The drones themselves could be cheap as hell, as long as they have a camera feed going back to, uh, some automagical targeting system. Maybe just point a cell phone at the target as if taking a picture?

Only defense I got is a powerful, wide-spectrum frequency jammer. No idea what the legalities look like for the government using them as defensive platforms. I doubt there are laws concerning such tactics.

Am I oversimplifying this? Devil and details and such? Comment and join me on the government's list!

Another thought on drone defense, maybe someone can comment. Why aren't the Russians and Ukrainians carrying 20-gauge anti-drone shotguns? A single-shot unit with a short barrel is super light and the very definition of reliability. Seems ideal, given that you can tweak a shotgun load 1,000 different ways for spread, distance and weight.

Don't know the ideal combat range, but I've got 8 shotguns of various sorts and I can get any sort of load, anywhere I want. Playing at my range, it's fun to see what I get with different barrel lengths, chokes and charges. If you really want cheap, I've loaded homemade black powder and gravel. LOL, pretty crappy and messy, but it might do for a drone. Bonus! Now you've made a giant smokescreen!

For example, I've got an absolute POS single-shot 20 that weighs nothing, folds in half, never fails to fire and cost about $100. Even has a cheapo red-dot on it, point and click interface. Probably take a day of testing, and a shitload of varied ammo, to shape up an anti-drone weapon. And while we're at it, I have a 1920s single-shot 20 that would get the job done. Lightweight and you can snap the barrel on and off in seconds, 3 parts total.

You can even get fancy and make the choke adjustable by twisting. I have such a shotgun from the 1950s, nothing new here. Choke too tight and you missed? Now it's closer? Yank the choke off and go wide with it.

Training young soldiers should be easy enough. My neighbor's 22-yo wife is hell on wheels with her 20-gauge over-and-under. She's shooting skeet at twice the range I see Russians dying from.

So again, why not load the soldiers with such a rig? At least 1 man per squad?

[–] CodeGameEat@lemmy.world 6 points 1 month ago* (last edited 1 month ago) (2 children)

I remember seeing this video a few years ago, and I've been really scared about drone technology, miniaturization and AI ever since... https://www.youtube.com/watch?v=9fa9lVwHHqg

[–] shalafi@lemmy.world 2 points 1 month ago (1 children)

Oh! I'm getting right after that! We'll probably watch Alter all night now.

Ever seen Uncanny Valley?

[–] CodeGameEat@lemmy.world 1 points 1 month ago

No, I'll watch it as soon as I have some time!

[–] Angry_Autist@lemmy.world 2 points 1 month ago

Thank you! I have been showing this video to people for years and so far not a single other person has gotten how terrifying this is.

[–] ivanafterall@lemmy.world 5 points 1 month ago

For whatever it's worth, everything they're about to do to you at Guantanamo is not who we are as a country.

[–] x00z@lemmy.world 3 points 1 month ago

Is this a copypasta?

[–] Eiri@lemmy.world 2 points 1 month ago

I imagine by the time you see the tiny drone and are able to aim at it, it's likely too late. And what if it's a kamikaze drone and the explosion is bigger than anticipated?

Telling your soldiers to shoot at that sounds riskier than "take cover as soon as you think there's a drone".

Anyway my understanding is that so far drones are more useful for destroying stuff than killing people.

A much simpler countermeasure to armed drones is a net.

As for surveillance drones... I'm not sure, militarily speaking, they care all that much. The enemy could already be watching them with satellites, high-altitude drones or balloons that would be nearly impossible to detect, or plain old binoculars, anyway.

Unless it's a covert operation, in which case the enemy launching a drone to find you is already very bad.

[–] marshadow@lemmy.world 13 points 1 month ago

The acronym "Laws" is a little too on the nose. I'd ask whether anyone involved in the development of these has seen the documentary film Robocop, but clearly they have and thought it was a great idea.

[–] bruhduh@lemmy.world 3 points 1 month ago

Terminator mbappe be like
