
Yes exactly that - Deviations. Which as we know is a very broad branch of mathematics all of its own.

I meant "rogue agents" as lazy slang. My field of specialism is definitely not A.I. 😀 🦥

However, from an ethics and international regulatory point of view (forgive my ignorance on this, but as far as the Autonomous Robotics Defence International Research & Design Council goes... and I don't 🤔 think it does?), I would suggest that "the risk of being outpaced in the next generation" involves the subjectivity of that phrase.

Within what professional guidelines framework of reference are the agreed parameters:

Defining the Concepts of:

1. Risk

2. Outpacing

3. Next

4. Generation

I'm an English Teacher 🤪


I think that is very well presented.

Two points had me wondering. In the first two experiments mentioned, I wondered about the respective 15% and 10% of unmatched predictive behaviours. Did they manifest as unpredictables or rogue agents? 😳

And the summary at the end, speculating on individuals randomly using AI bots/software to experiment with decisional outcomes of non-IRL scenarios, using 'positivity'.

Positivity is relative to the subject of the outcome innit?

Like if I was to positively experiment with scenarios aiming to promote eugenics, for example.


If I get you correctly… Not necessarily “rogue agents”. It could just be deviations from expected human patterns rather than full unpredictability.

And yeah, spot on—positivity is relative not just to the outcome but also to our moral compass.

If we take your eugenics example further: say China runs such an AI eugenics simulation, and the U.S. follows to avoid falling behind... does maintaining existing moral standards outweigh the risk of being outpaced in the next generation?

That’s the real dilemma, isn’t it?
