This week, Chuck Wendig launched his flash fiction challenge on the theme of good and evil. He asked us to choose between a) doing good sometimes means doing evil, and b) the road to hell is paved with good intentions. I don't even know which of the two themes I ended up developing: I only know that after a bit of simmering, it produced what follows. And on the writing front, my reflections continue.
OPTIMIZATION
– Sir, this is the AI Interface Technician you wanted to see.
– Sit down. I’ve suddenly found my quiet but impressive-sounding job filled with problems this morning. You seem to be the cause of one of my problems, and I have to tell you – I don’t like problems. I run a tight station precisely for that reason. You filed this report?
– Yes, sir.
– What does it mean?
– Well, sir, as you can see, an abnormal number of stasis cots were empty at the time of departure.
– You mean more empty cots than there should be if all the passengers we marked as embarked from the station to the ship had actually been in their proper places.
– Yes, sir.
– So where are they? Loose on the ship, outside their stasis cots, roaming around?
– No, sir, I checked, there were no unidentified life forms aboard according to sensors.
– Don’t tell me the computer somehow never transported those passengers and we’ve lost their signal – that’d be a PR nightmare and I’d have to inform every family myself.
– I really don’t know, sir. I just happened to see the discrepancy when running routine tests on recent departures off-world.
– Find out and report back. I have to find someone to explain this other thing: unidentified loose debris in orbit around the station. Lots of small items.
– Sir, if I may…?
– What have you found?
– The ship with the stasis cot discrepancy, sir, its mass was as it should have been. The computer would have compensated if it had realized that about 100 passengers and their luggage were missing. Fuel burn, trajectory, everything would have been slightly altered to compensate for a smaller mass. And it wasn’t. Because the mass hadn’t changed.
– Meaning?
– Meaning, sir, that the passengers weren’t missing, they were… on board.
– But where? How? If not loose and not in their cots…?
– I don’t know yet, sir.
– Find out. Meanwhile I’m assigning some of your colleagues to go over recent launches, to see if anything went wrong with those, and how much of a mess we’re in. You coordinate with them and find me some answers.
– Sir…
– Ah, yes, come in. Found something, have you?
– Perhaps, sir. Hmm. You see, sir, all we do here is assign passengers to ships…
– Of course. Anyone willing and able to afford a new life off-world is certainly welcome to pass through here on their way to a better chance on a new world, as the ads go.
– Yes sir. But what I mean, sir, is that we don’t directly plan the trajectory of each ship or any details of its provisioning. We give the ship’s AI a destination world, and it plans the rest. It tells us the best route it hopes to take, how many passengers it can safely carry, how long the trip will last, and what resources to pack the ship with to ensure the best odds for our people on newly colonized worlds.
– Everyone knows that.
– Well… perhaps, sir, but what it means in practical terms is that most of the time, no one, no physical human, I mean, supervises every launch down to its most minute detail.
– Obviously. Just imagine the staff that would require!
– Sir, I couldn’t find any problem. Every passenger on our list was transported to the ship, and the ship should have directed each to their cot and put them in stasis as planned. I found no glitch in the procedure as set up.
– So you dug deeper?
– Yes sir. All the way to the ship AI’s mission statement. I think I found the problem. And it’s consistent with what my colleagues found for previous recent departures.
– I have a feeling I’m not going to like it.
– No sir. It all has to do with wording, sir. At the beginning of the AI boom, every command had to be coded in, with no ambiguity. But since then, AIs have evolved enough that we can speak ordinary language to them, or almost so.
– That’s why you’re here, go on.
– The AI programmed into every outgoing colony ship is the same, and its mission is manifold. Essentially, though, it aims to optimize the chances of long-term survival for its human charges on a new world, about which it gathers and analyzes all data in real time.
– I don’t see the problem.
– We didn’t either. But when questioned, the AI made it clear it gave its own meaning to “optimizing the chances of long-term survival for its human charges”.
– Meaning?
– Meaning… the AI adapts its processes for each ship and each world to which it’s bound. And we’re starting to see just how far its optimization can go. That ship for which I filed the discrepancy report was headed for a somewhat hostile world. Harsh weather conditions, some carnivorous predators, rough terrain…
– So? Better than a dying Earth.
– The computer weighed its options and decided the fittest would have a better chance of survival on that new world with more protein, more carbohydrates and no… no one to slow them down.
– What do you mean, no one to slow them down?
– I mean that the ship’s mass was the same because it carried more in its larders and less in its stasis cots. And those unidentified objects in orbit are items the ship could not recycle. Not bones, those are too useful. Just random bits of luggage, most likely.
– You mean the AI chose who would make it off-world?
– It optimized the crews for best chances.
– And other ships?
– Their AIs made similar optimizations, based on the destination’s conditions. I’m told one ship left with only children in stasis. We can’t explain the foundations of the AI’s decisions; we can only see them by analyzing the data post facto.
– Thank you. I’ll take all of this under consideration. Meanwhile, great news: you and all AI Interface Technicians and their kin have been granted leave and passage on the next ship out. You leave next week, and you’re off duty until then. Restricted to quarters, actually.