Negative Entropy
Nov 30, 2009

Comrade Gorbash posted:

What you're calling too far, I'm calling the minimum distance they need to cover.

They already have a FATE adaptation for rules lite.

ProfessorCirno
Feb 17, 2011

The strongest! The smartest!
The rightest!
I'm not asking for rules lite, I'm asking for rules less dense than the Earth itself.

Kai Tave
Jul 2, 2012
Fallen Rib

Kommando posted:

They already have a FATE adaptation for rules lite.

Turns out there's a vast middle ground between "FATE" and "basically Shadowrun, only with no dicepools but even more tedious chargen." Also lol at the idea that 4E isn't a crunchy game.

Comrade Gorbash
Jul 12, 2011

My paper soldiers form a wall, five paces thick and twice as tall.

ProfessorCirno posted:

I'm not asking for rules lite, I'm asking for rules less dense than the Earth itself.

The iron core of the Earth itself is fluffy cotton candy compared to EP1's character creation rules.

Kai Tave
Jul 2, 2012
Fallen Rib
Even outside of chargen I found parts of EP's system unnecessarily crufty. What are there, like six separate combat skills for using ranged weapons? Seven if you count thrown weapons?

Axelgear
Oct 13, 2011

If I'm wrong, please don't hesitate to tell me. It happens pretty often and I will try to change my opinion if I'm presented with evidence.
Just to throw the topic back to the Ultimates for a moment here, something that bugs me about the whole discussion is that the Ultimates aren't fascists. Chauvinists, certainly, but fascism has a meaning that doesn't apply to them. The only similarities are at the surface level, in their militarism, chauvinism, and belief in autarky.

Philosophically, though, the Ultimates are radical individualists and egoists, as well as being nihilists; things that are inimical to fascist ideals of the sublimation of the individual to some greater whole or purpose. Fascists tend to worship the sacrifice of the individual, especially soldiers, to the whole, something that is incompatible with the mainstream Ultimate philosophy of rational self-interest. Ultimates see the individual as the ultimate worthy goal, as the only thing capable of having any value. Fascists see the individual as worthless except in the context of the greater whole.

The Jovians are fascists. Textbook fascists, really: idolizing some greater in-group (humanity) that must be defended against an out-group (everyone else), simultaneously declaring that in-group superior and yet painting that out-group as an apocalyptic threat. Using that out-group to justify seizures of power by the state, all in the name of security; using institutional power to discriminate against those who do not fit the mold of society's ideal citizen.

None of this is to come down on the issue one way or the other. I think the Ultimates should stay for the same reasons the Jovians should: They are an understandable reaction to the horrors of the Fall and make the setting more interesting and colourful with the kinds of questions it asks. That said, the Ultimates are bigoted chauvinist bullies, who often enjoy looking down on those they consider inferior. That's the part that gets uncomfortably close to reality for some. You don't have to be a fascist to wear jackboots.

Covok
May 27, 2013

Yet where is that woman now? Tell me, in what heaven does she reside? None of them. Because no God bothered to listen or care. If that is what you think it means to be a God, then you and all your teachings are welcome to do as that poor woman did. And vanish from these realms forever.

Kai Tave posted:

Even outside of chargen I found parts of EP's system unnecessarily crufty. What are there, like six separate combat skills for using ranged weapons? Seven if you count thrown weapons?

If we really want to go down this rabbit hole, how about the fact that there are language skills? You start with one at 90%, but you have a muse that can translate every language anyway. What's the fucking point? If my muse can translate every language, why the fuck do I have a language skill? And why the fuck isn't it at a hundred percent if I have a universal translator?

ProfessorCirno
Feb 17, 2011

The strongest! The smartest!
The rightest!
Language skills were, funny enough, way more forgivable because they were "knowledge skills," not "active skills." Or...whatever designation EP gave them.

Kai Tave
Jul 2, 2012
Fallen Rib

ProfessorCirno posted:

Language skills were, funny enough, way more forgivable because they were "knowledge skills," not "active skills." Or...whatever designation EP gave them.

It's still a fair point that it's a pretty odd decision to make something like that a skill you have to purchase with chargen currency when everybody has a highly sophisticated personal translator in their brain for free simply as part of existing. Of course the GM can contrive reasons to deprive you of your muse at a crucial moment when you really need to know what someone is saying in Lunar Mandarin or whatever, but I feel like there's probably a more elegant way to handle it.

ProfessorCirno
Feb 17, 2011

The strongest! The smartest!
The rightest!
Oh, I'm not saying it was a GOOD idea, hahahaha. Just that they managed to mitigate their bad idea, at least a little, in that case.

Kai Tave
Jul 2, 2012
Fallen Rib

ProfessorCirno posted:

Oh, I'm not saying it was a GOOD idea, hahahaha. Just that they managed to mitigate their bad idea, at least a little, in that case.

Right, in some games I actually think an active skill/knowledge skill split is a good thing, because it gives everybody the resources to make their characters more well-rounded, even if in practice a separate knowledge skill pool often becomes a roundabout way of simply writing a few of your character's hobbies and interests down on the sheet. But when you have an in-game conceit that everybody has Google Hyper-Translate installed in their head, it becomes a lot sillier not to look at the skill list and ask yourself if you really need language skills. It's the sort of unexamined blind spot I can imagine arising when a bunch of guys with a background in crunchy, crufty RPGs sit down to make another crunchy, crufty RPG.

Crunch is good, but it should serve a purpose other than to simply pad out pages.

Helical Nightmares
Apr 30, 2009
*cackles madly*

https://www.technologyreview.com/s/608596/scientists-hack-a-computer-using-dna/

quote:

Scientists Hack a Computer Using DNA

Malware can be encoded into a gene and used to take over a computer program.

by Antonio Regalado August 10, 2017

In what appears to be the first successful hack of a software program using DNA, researchers say malware they incorporated into a genetic molecule allowed them to take control of a computer used to analyze it.

The biological malware was created by scientists at the University of Washington in Seattle, who call it the first “DNA-based exploit of a computer system.”

To carry out the hack, researchers led by Tadayoshi Kohno (see “Innovators Under 35, 2007”) and Luis Ceze encoded malicious software in a short stretch of DNA they purchased online. They then used it to gain “full control” over a computer that tried to process the genetic data after it was read by a DNA sequencing machine.

The researchers warn that hackers could one day use faked blood or spit samples to gain access to university computers, steal information from police forensics labs, or infect genome files shared by scientists.

For now, DNA malware doesn’t pose much of a security risk. The researchers admit that to pull off their intrusion, they created the “best possible” chances of success by disabling security features and even adding a vulnerability to a little-used bioinformatics program. Their paper appears here.

“Their exploit is basically unrealistic,” says Yaniv Erlich, a geneticist and programmer who is chief scientific officer of MyHeritage.com, a genealogy website.

Previously, Kohno was among the first to show how to hack into an automobile through its diagnostic port, later also gaining access remotely by attacking cars through Bluetooth connections.

The new DNA malware will be presented next week at the Usenix Security Symposium in Vancouver. “We look at emerging technologies and ask if there are upcoming security threats that might manifest, so the idea is to get ahead,” says Peter Ney, a graduate student in Kohno’s Security and Privacy Research Lab.

To make the malware, the team translated a simple computer command into a short stretch of 176 DNA letters, denoted as A, G, C, and T. After ordering copies of the DNA from a vendor for $89, they fed the strands to a sequencing machine, which read off the gene letters, storing them as binary digits, 0s and 1s.

Erlich says the attack took advantage of a spill-over effect (a buffer overflow), in which data that exceeds a storage buffer can be interpreted as a computer command. In this case, the command contacted a server controlled by Kohno’s team, from which they took control of a computer in their lab they were using to analyze the DNA file.

Companies that manufacture synthetic DNA strands and mail them to scientists are already on the alert for bioterrorists. In the future, the researchers suggest, they might also have to start checking DNA sequences for computer threats.

The University of Washington team also cautions that hackers could use more conventional means to target people’s genetic data, precisely because it is increasingly appearing online (see “10 Breakthrough Technologies 2015: Internet of DNA”) and even being accessed through app stores (see “10 Breakthrough Technologies 2016: DNA App Store”).

In some cases, scientific programs used to organize and interpret DNA data aren’t actively maintained, and that could create risks, says James Bonfield, a bioinformatics expert at the Sanger Institute, in the United Kingdom. Bonfield says he authored the program that the University of Washington researchers targeted in their attack. He says the short program, “fqzcomp,” was written as an experiment for a file compression competition and probably wasn’t ever employed.
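
The mechanics are simple to sketch: the sequencer stores each base as two bits, so 176 bases pack into 44 bytes, and if the program parsing those bytes copies them into a too-small buffer, the overflow can be steered into executing them. A minimal Python sketch; the A/C/G/T-to-bits assignment below is an assumption, since the article doesn't give the researchers' actual mapping:

```python
# Sketch of how sequencer output becomes bytes a downstream program parses.
# The A/C/G/T -> 00/01/10/11 assignment is an assumption; the article
# doesn't specify the mapping the researchers used.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def dna_to_bytes(seq: str) -> bytes:
    """Pack a DNA string into bytes, four bases (2 bits each) per byte."""
    out = bytearray()
    for i in range(0, len(seq) - len(seq) % 4, 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

# 176 bases -> 44 bytes. If a bioinformatics tool copies those bytes into
# a fixed-size buffer without checking the length, the overflow can
# overwrite adjacent memory, which is the exploit described above.
payload = dna_to_bytes("ACGT" * 44)
assert len(payload) == 44
```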

Yoshimo
Oct 5, 2003

Fleet of foot, and all that!
Some NEW THING OR OTHER has been put up:

http://eclipsephase.com/qsr

(Quick Start Rules + Scenario, Second Edition.)

edit - holy shit that starting Firewall team of characters :hellyeah:

edit 2 - "We're sending you to investigate your own disappearance." I love this scenario so far.

Yoshimo fucked around with this message at 00:41 on Sep 2, 2017

ProfessorCirno
Feb 17, 2011

The strongest! The smartest!
The rightest!

Yoshimo posted:

edit 2 - "We're sending you to investigate your own disappearance." I love this scenario so far.

It's SUCH A GOOD HOOK!

Flavivirus
Dec 14, 2011

The next stage of evolution.
Man that new layout and art is pretty lovely too.

Gearhead
Feb 13, 2007
The Metroid of Humor

Axelgear posted:

None of this is to come down on the issue one way or the other. I think the Ultimates should stay for the same reasons the Jovians should: They are an understandable reaction to the horrors of the Fall and make the setting more interesting and colourful with the kinds of questions it asks. That said, the Ultimates are bigoted chauvinist bullies, who often enjoy looking down on those they consider inferior. That's the part that gets uncomfortably close to reality for some. You don't have to be a fascist to wear jackboots.

The heart of it is that the Jovians and Ultimates BOTH were a bit on the nose, without realizing it, about the setting as a whole. They both call too much attention to the contrarian position that Humanity fucked the Earth, fucked itself, and is unraveling slowly while trying to pretend that Everything Is Fine In Our Sexy Six Dicked Cyber Baboon Future. The Jovians can be looked down on because they're living in tin cans and die from radiation. The Ultimates take the tools of the setting and go to war against the idea that Humanity is Just Fine.

What if, just what if, resleeving is suicide and you are just a replica of a person long dead?

What if the real you is sitting in a vat somewhere in a TITAN facility on the ass end of the galaxy, living through some horrible cybernetic punishment straight out of a Harlan Ellison fever dream?

How long before some anarchist accidentally creates a new Seed AI and this time it fucks everyone forever?

Gearhead fucked around with this message at 20:25 on Oct 2, 2017

xiw
Sep 25, 2011

i wake up at night
night action madness nightmares
maybe i am scum

Cpig Haiku contest 2020 winner
Yeah I love that kind of stuff about the game - like, because so many people in the setting have resleeved at some point, there's a huge social incentive to argue that resleeving is totally okay. If you don't think it's okay, you're living in a horror-world society and nobody around you WANTS to believe that. It's great.

Kwyndig
Sep 23, 2006

Heeeeeey


Well yeah, there's only an extremely small number of people who haven't resleeved at all in the setting. So if you're arguing some consciousness thing, then from your viewpoint everybody you're arguing with is already dead and you're dealing with their copies. Believing that is actually a real mental illness (Capgras syndrome), so good luck convincing people you're not crazy.

Gearhead
Feb 13, 2007
The Metroid of Humor

Kwyndig posted:

Well yeah, there's only an extremely small number of people who haven't resleeved at all in the setting. So if you're arguing some consciousness thing, then from your viewpoint everybody you're arguing with is already dead and you're dealing with their copies. Believing that is actually a real mental illness (Capgras syndrome), so good luck convincing people you're not crazy.

Unless you're from Jupiter, and 'everyone else' is the rest of the solar system that aren't the horrible, undying oligarchs running the PC behind the scenes.

In which case that's reality.

Hexenritter
May 20, 2001


Kwyndig posted:

Capgras syndrome

:stonk: fuck me, that's terrifying

Gearhead
Feb 13, 2007
The Metroid of Humor

Hexenritter posted:

:stonk: fuck me, that's terrifying

Imagine that turned inwards, that's failing an Alienation roll right there.

Kwyndig
Sep 23, 2006

Heeeeeey


Gearhead posted:

Imagine that turned inwards, that's failing an Alienation roll right there.

Oh, you mean the Cotard Delusion (where you think you're dead). That would be a good result of a failed roll, yeah.

Gearhead
Feb 13, 2007
The Metroid of Humor

Kwyndig posted:

Oh, you mean the Cotard Delusion (where you think you're dead). That would be a good result of a failed roll, yeah.

The human mind is so wonderfully fucked sometimes. :haw:

Hexenritter
May 20, 2001


Yeah I've heard about the Cotard Delusion before, and that is proper fucked.

Excuse me while I stare at Kwyndig's avatar for ten minutes to make myself feel better.

Negative Entropy
Nov 30, 2009

RPPR have done an actual play of second edition EP.

Helical Nightmares
Apr 30, 2009
Good article in Nature today about AI, brain-computer interfaces, individual privacy and identity. Useful fodder for Eclipse Phase adventures.

Four ethical priorities for neurotechnologies and AI

https://www.nature.com/news/four-et...mpaign=20171109

quote:

08 November 2017

Artificial intelligence and brain–computer interfaces must respect and preserve people's privacy, identity, agency and equality, say Rafael Yuste, Sara Goering and colleagues.

Consider the following scenario. A paralysed man participates in a clinical trial of a brain–computer interface (BCI). A computer connected to a chip in his brain is trained to interpret the neural activity resulting from his mental rehearsals of an action. The computer generates commands that move a robotic arm. One day, the man feels frustrated with the experimental team. Later, his robotic hand crushes a cup after taking it from one of the research assistants, and hurts the assistant. Apologizing for what he says must have been a malfunction of the device, he wonders whether his frustration with the team played a part.

This scenario is hypothetical. But it illustrates some of the challenges that society might be heading towards.

Current BCI technology is mainly focused on therapeutic outcomes, such as helping people with spinal-cord injuries. It already enables users to perform relatively simple motor tasks — moving a computer cursor or controlling a motorized wheelchair, for example. Moreover, researchers can already interpret a person's neural activity from functional magnetic resonance imaging scans at a rudimentary level [1] — that the individual is thinking of a person, say, rather than a car.

It might take years or even decades until BCI and other neurotechnologies are part of our daily lives. But technological developments mean that we are on a path to a world in which it will be possible to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people's brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced.

Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better. But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.

It is crucial to consider the possible ramifications now.

The Morningside Group comprises neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers. It includes representatives from Google and Kernel (a neurotechnology start-up in Los Angeles, California); from international brain projects; and from academic and research institutions in the United States, Canada, Europe, Israel, China, Japan and Australia. We gathered at a workshop sponsored by the US National Science Foundation at Columbia University, New York, in May 2017 to discuss the ethics of neurotechnologies and machine intelligence.

We believe that existing ethics guidelines are insufficient for this realm [2]. These include the Declaration of Helsinki, a statement of ethical principles first established in 1964 for medical research involving human subjects (go.nature.com/2z262ag); the Belmont Report, a 1979 statement crafted by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research (go.nature.com/2hrezmb); and the Asilomar artificial intelligence (AI) statement of cautionary principles, published early this year and signed by business leaders and AI researchers, among others (go.nature.com/2ihnqac).

To begin to address this deficit, here we lay out recommendations relating to four areas of concern: privacy and consent; agency and identity; augmentation; and bias. Different nations and people of varying religions, ethnicities and socio-economic backgrounds will have differing needs and outlooks. As such, governments must create their own deliberative bodies to mediate open debate involving representatives from all sectors of society, and to determine how to translate these guidelines into policy, including specific laws and regulations.

Intelligent investments
Some of the world's wealthiest investors are betting on the interplay between neuroscience and AI. More than a dozen companies worldwide, including Kernel and Elon Musk's start-up firm Neuralink, which launched this year, are investing in the creation of devices that can both 'read' human brain activity and 'write' neural information into the brain. We estimate that current spending on neurotechnology by for-profit industry is already US$100 million per year, and growing fast.

Investment from other sectors is also considerable. Since 2013, more than $500 million in federal funds has gone towards the development of neurotechnology under the US BRAIN initiative alone.

Current capabilities are already impressive. A neuroscientist paralysed by amyotrophic lateral sclerosis (ALS; also known as Lou Gehrig's or motor neuron disease) has used a BCI to run his laboratory, write grant applications and send e-mails [3]. Meanwhile, researchers at Duke University in Durham, North Carolina, have shown that three monkeys with electrode implants can operate as a 'brain net' to move an avatar arm collaboratively [4]. These devices can work across thousands of kilometres if the signal is transmitted wirelessly by the Internet.

Soon such coarse devices, which can stimulate and read the activity of a few dozen neurons at most, will be surpassed. Earlier this year, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Neural Engineering System Design. It aims to win approval from the US Food and Drug Administration within 4 years for a wireless human brain device that can monitor brain activity using 1 million electrodes simultaneously and selectively stimulate up to 100,000 neurons.

Meanwhile, Google, IBM, Microsoft, Facebook, Apple and numerous start-ups are building ever-more-sophisticated artificial neural networks that can already outperform humans on tasks with well-defined inputs and outputs.

Last year, for example, researchers at the University of Washington in Seattle demonstrated that Google's FaceNet system could recognize one face from a million others. Another Google system with similar neural-network architecture far outperforms well-travelled humans at guessing where in the world a street scene has been photographed, demonstrating the generality of the technique. In August, Microsoft announced that, in certain metrics, its neural network for recognizing conversational speech has matched the abilities of even trained professionals, who have the option of repeatedly rewinding and listening to words used in context. And using electroencephalogram (EEG) data, researchers at the University of Freiburg in Germany showed in July how neural networks can be used to decode planning-related brain activity and so control robots [5].

Future neural networks derived from a better understanding of how real ones work will almost certainly be much more powerful even than these examples. The artificial networks in current use have been inspired by models of brain circuits that are more than 50 years old, which are based on recording the activity of individual neurons in anaesthetized animals [6]. In today's neuroscience labs, researchers can monitor and manipulate the activity of thousands of neurons in awake, behaving animals, owing to advances in optical methods, computing, molecular engineering and microelectronics.

We are already intimately connected to our machines. Researchers at Google calculated this year that the average user touches their phone nearly one million times annually (unpublished data). The human brain controls auditory and visual systems to decipher sounds and images, and commands limbs to hold and manipulate our gadgets. Yet the convergence of developments in neurotechnologies and AI would offer something qualitatively different — the direct linking of people's brains to machine intelligence, and the bypassing of the normal sensorimotor functions of brains and bodies.

Four concerns
For neurotechnologies to take off in general consumer markets, the devices would have to be non-invasive, of minimal risk, and require much less expense to deploy than current neurosurgical procedures. Nonetheless, even now, companies that are developing devices must be held accountable for their products, and be guided by certain standards, best practices and ethical norms.

We highlight four areas of concern that call for immediate action. Although we raise these issues in the context of neurotechnology, they also apply to AI.

Privacy and consent. An extraordinary level of personal information can already be obtained from people's data trails. Researchers at the Massachusetts Institute of Technology in Cambridge, for example, discovered in 2015 that fine-grained analysis of people's motor behaviour, revealed through their keyboard typing patterns on personal devices, could enable earlier diagnosis of Parkinson's disease [7]. A 2017 study suggests that measures of mobility patterns, such as those obtained from people carrying smartphones during their normal daily activities, can be used to diagnose early signs of cognitive impairment resulting from Alzheimer's disease [8].

Algorithms that are used to target advertising, calculate insurance premiums or match potential partners will be considerably more powerful if they draw on neural information — for instance, activity patterns from neurons associated with certain states of attention. And neural devices connected to the Internet open up the possibility of individuals or organizations (hackers, corporations or government agencies) tracking or even manipulating an individual's mental experience.

We believe that citizens should have the ability — and right — to keep their neural data private (see also 'Agency and identity'). We propose the following steps to ensure this.

For all neural data, the ability to opt out of sharing should be the default choice, and assiduously protected. People readily give up their privacy rights to commercial providers of services, such as Internet browsing, social media or entertainment, without fully understanding what they are surrendering. A default of opting out would mean that neural data are treated in the same way that organs or tissues are in most countries. Individuals would need to explicitly opt in to share neural data from any device. This would involve a safe and secure process, including a consent procedure that clearly specifies who will use the data, for what purposes and for how long.

Even with this approach, neural data from many willing sharers, combined with massive amounts of non-neural data — from Internet searches, fitness monitors and so on — could be used to draw 'good enough' conclusions about individuals who choose not to share. To limit this problem, we propose that the sale, commercial transfer and use of neural data be strictly regulated. Such regulations — which would also limit the possibility of people giving up their neural data or having neural activity written directly into their brains for financial reward — may be analogous to legislation that prohibits the sale of human organs, such as the 1984 US National Organ Transplant Act.

Another safeguard is to restrict the centralized processing of neural data. We advocate that computational techniques, such as differential privacy or 'federated learning', be deployed to protect user privacy (see 'Protecting privacy'). The use of other technologies specifically designed to protect people's data would help, too. Blockchain-based techniques, for instance, allow data to be tracked and audited, and 'smart contracts' can give transparent control over how data are used, without the need for a centralized authority. Lastly, open-data formats and open-source code would allow for greater transparency about what stays private and what is transmitted.
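
To make "differential privacy" concrete: the simplest version clips each person's contribution to an aggregate, then adds noise scaled to the most one person could change the result. A minimal sketch in Python, not from the article; the epsilon value and the score values are invented:

```python
import numpy as np

def dp_mean(values, lo, hi, epsilon):
    """Release the mean of bounded values with epsilon-differential privacy.

    Clipping to [lo, hi] caps any one person's influence on the mean at
    (hi - lo) / n (the "sensitivity"); Laplace noise with scale
    sensitivity / epsilon then masks any individual's contribution.
    """
    values = np.clip(np.asarray(values, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(values)
    return values.mean() + np.random.laplace(0.0, sensitivity / epsilon)

# e.g. publish an average "attention score" without exposing any one user
print(dp_mean([0.2, 0.9, 0.4, 0.7], lo=0.0, hi=1.0, epsilon=0.5))
```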

----------
Protecting privacy: Federated learning

When technology companies use machine learning to improve their software, they typically gather user information on their servers to analyse how a particular service is being used and then train new algorithms on the aggregated data. Researchers at Google are experimenting with an alternative method of artificial-intelligence training called federated learning. Here, the teaching process happens locally on each user's device without the data being centralized: the lessons aggregated from the data (for instance, the knowledge that the word 'weekly' can be used as an adjective and an adverb) are sent back to Google's servers, but the actual e-mails, texts and so on remain on the user's own phone. Other groups are exploring similar ideas. Thus, information systems with improved designs could be used to enhance users' ownership and privacy over their personal data, while still enabling valuable computations to be performed on those data.
----------
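
In code, the round trip the box describes is short: each device fits an update on data that never leaves it, and the server sees and averages only the updates. A toy FedAvg-style sketch; the linear model and all the numbers are invented for illustration:

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient step on a device's private (X, y); only the
    resulting weights leave the device, never the data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w, devices):
    """Server broadcasts w, gets back locally trained weights, averages."""
    return np.mean([local_update(w, X, y) for X, y in devices], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five phones, each holding its own private data
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, devices)
print(w)  # close to true_w, learned without centralizing any raw data
```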

Agency and identity. Some people receiving deep-brain stimulation through electrodes implanted in their brains have reported feeling an altered sense of agency and identity. In a 2016 study, a man who had used a brain stimulator to treat his depression for seven years reported in a focus group [9] that he began to wonder whether the way he was interacting with others — for example, saying something that, in retrospect, he thought was inappropriate — was due to the device, his depression or whether it reflected something deeper about himself. He said: “It blurs to the point where I'm not sure ... frankly, who I am.”

Neurotechnologies could clearly disrupt people's sense of identity and agency, and shake core assumptions about the nature of the self and personal responsibility — legal or moral.

People could end up behaving in ways that they struggle to claim as their own, if machine learning and brain-interfacing devices enable faster translation between an intention and an action, perhaps by using an 'auto-complete' or 'auto-correct' function. If people can control devices through their thoughts across great distances, or if several brains are wired to work collaboratively, our understanding of who we are and where we are acting will be disrupted.

As neurotechnologies develop and corporations, governments and others start striving to endow people with new capabilities, individual identity (our bodily and mental integrity) and agency (our ability to choose our actions) must be protected as basic human rights.

We recommend adding clauses protecting such rights ('neurorights') to international treaties, such as the 1948 Universal Declaration of Human Rights. However, this might not be enough — international declarations and laws are just agreements between states, and even the Universal Declaration is not legally binding. Thus, we advocate the creation of an international convention to define prohibited actions related to neurotechnology and machine intelligence, similar to the prohibitions listed in the 2010 International Convention for the Protection of All Persons from Enforced Disappearance. An associated United Nations working group could review the compliance of signatory states, and recommend sanctions when needed.

Such declarations must also protect people's rights to be educated about the possible cognitive and emotional effects of neurotechnologies. Currently, consent forms typically focus only on the physical risks of surgery, rather than the possible effects of a device on mood, personality or sense of self.

Augmentation. People frequently experience prejudice if their bodies or brains function differently from most [10]. The pressure to adopt enhancing neurotechnologies, such as those that allow people to radically expand their endurance or sensory or mental capacities, is likely to change societal norms, raise issues of equitable access and generate new forms of discrimination.

Moreover, it's easy to imagine an augmentation arms race. In recent years, we have heard staff at DARPA and the US Intelligence Advanced Research Projects Activity discuss plans to provide soldiers and analysts with enhanced mental abilities ('super-intelligent agents'). These would be used for combat settings and to better decipher data streams.

Any lines drawn will inevitably be blurry, given how hard it is to predict which technologies will have negative impacts on human life. But we urge that guidelines are established at both international and national levels to set limits on the augmenting neurotechnologies that can be implemented, and to define the contexts in which they can be used — as is happening for gene editing in humans.

Privacy and individuality are valued more highly in some cultures than in others. Therefore, regulatory decisions must be made within a culture-specific context, while respecting universal rights and global guidelines. Moreover, outright bans of certain technologies could simply push them underground, so efforts to establish specific laws and regulations must include organized forums that enable in-depth and open debate.

Such efforts should draw on the many precedents for building international consensus and incorporating public opinion into scientific decision-making at the national level [11]. For instance, after the First World War, a 1925 conference led to the development and ratification of the Geneva Protocol, a treaty banning the use of chemical and biological weapons. Similarly, after the Second World War, the UN Atomic Energy Commission was established to deal with the use of atomic energy for peaceful purposes and to control the spread of nuclear weapons.

In particular, we recommend that the use of neural technology for military purposes be stringently regulated. For obvious reasons, any moratorium should be global and sponsored by a UN-led commission. Although such commissions and similar efforts might not resolve all enhancement issues, they offer the best-available model for publicly acknowledging the need for restraint, and for wide input into the development and implementation of a technology.

Bias. When scientific or technological decisions are based on a narrow set of systemic, structural or social concepts and norms, the resulting technology can privilege certain groups and harm others. A 2015 study [12] found that postings for jobs displayed to female users by Google's advertising algorithm pay less well than those displayed to men. Similarly, a ProPublica investigation revealed last year that algorithms used by US law-enforcement agencies wrongly predict that black defendants are more likely to reoffend than white defendants with a similar criminal record (go.nature.com/29aznyw). Such biases could become embedded in neural devices. Indeed, researchers who have examined these kinds of cases have shown that defining fairness in a mathematically rigorous manner is very difficult (go.nature.com/2ztfjt9).
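
The difficulty is easy to demonstrate even on toy data: "fairness" has several reasonable formalizations, they have to be checked separately, and when base rates differ between groups they generally cannot all hold at once. A small Python check of two common criteria; every number below is invented:

```python
def rate(preds, mask):
    """Fraction of positive predictions among the selected rows."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

# Invented toy data: group membership, ground truth, model output.
group = [0, 0, 0, 0, 1, 1, 1, 1]
label = [1, 1, 0, 0, 1, 0, 0, 0]
pred  = [1, 0, 0, 0, 1, 1, 0, 0]

# Demographic parity: positive-prediction rate should match across groups.
print(rate(pred, [g == 0 for g in group]),   # 0.25
      rate(pred, [g == 1 for g in group]))   # 0.50 -> violated

# Equal opportunity: true-positive rate should match across groups.
print(rate(pred, [g == 0 and l == 1 for g, l in zip(group, label)]),  # 0.5
      rate(pred, [g == 1 and l == 1 for g, l in zip(group, label)]))  # 1.0 -> violated
```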

Practical steps to counter bias within technologies are already being discussed in industry and academia. Such ongoing public discussions and debate are necessary to shape definitions of problematic biases and, more generally, of normality.

We advocate that countermeasures to combat bias become the norm for machine learning. We also recommend that probable user groups (especially those who are already marginalized) have input into the design of algorithms and devices as another way to ensure that biases are addressed from the first stages of technology development.

Responsible neuroengineering
Underlying many of these recommendations is a call for industry and academic researchers to take on the responsibilities that come with devising devices and systems capable of bringing such change. In doing so, they could draw on frameworks that have already been developed for responsible innovation.

In addition to the guidelines mentioned above, the UK Engineering and Physical Sciences Research Council, for instance, provides a framework to encourage innovators to “anticipate, reflect, engage and act” in ways that “promote ... opportunities for science and innovation that are socially desirable and undertaken in the public interest”. Among the various efforts to address this in AI, the IEEE Standards Association created a global ethics initiative in April 2016, with the aim of embedding ethics into the design of processes for all AI and autonomous systems.

History indicates that profit hunting will often trump social responsibility in the corporate world. And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren't prepared. We think that mindsets could be altered and the producers of devices better equipped by embedding an ethical code of conduct into industry and academia.

A first step towards this would be to expose engineers, other tech developers and academic-research trainees to ethics as part of their standard training on joining a company or laboratory. Employees could be taught to think more deeply about how to pursue advances and deploy strategies that are likely to contribute constructively to society, rather than to fracture it.

This type of approach would essentially follow that used in medicine. Medical students are taught about patient confidentiality, non-harm and their duties of beneficence and justice, and are required to take the Hippocratic Oath to adhere to the highest standards of the profession.

The possible clinical and societal benefits of neurotechnologies are vast. To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.

Helical Nightmares
Apr 30, 2009
If Jovians eschew psychosurgery, they just might turn to cybernetic brain implants to control their mood, like this one being tested by DARPA in humans right now.

This also brings up questions of "what would a society look like if the government could monitor your mood in real time with AI". All good fodder for speculative Eclipse Phase adventures.

AI-controlled brain implants for mood disorders tested in people

https://www.nature.com/news/ai-cont...mpaign=20171123

quote:

Researchers funded by the US military are developing appliances to record neural activity and automatically stimulate the brain to treat mental illness.

Brain implants that deliver electrical pulses tuned to a person’s feelings and behaviour are being tested in people for the first time. Two teams funded by the US military’s research arm, the Defense Advanced Research Projects Agency (DARPA), have begun preliminary trials of ‘closed-loop’ brain implants that use algorithms to detect patterns associated with mood disorders. These devices can shock the brain back to a healthy state without input from a physician.

The work, presented last week at the Society for Neuroscience (SfN) meeting in Washington DC, could eventually provide a way to treat severe mental illnesses that resist current therapies. It also raises thorny ethical concerns, not least because the technique could give researchers a degree of access to a person’s inner feelings in real time.

The general approach — using a brain implant to deliver electric pulses that alter neural activity — is known as deep-brain stimulation. It is used to treat movement disorders such as Parkinson’s disease, but has been less successful when tested against mood disorders. Early evidence suggested that constant stimulation of certain brain regions could ease chronic depression, but a major study involving 90 people with depression found no improvement after a year of treatment [1].

The scientists behind the DARPA-funded projects say that their work might succeed where earlier attempts failed, because they have designed their brain implants specifically to treat mental illness — and to switch on only when needed. “We’ve learned a lot about the limitations of our current technology,” says Edward Chang, a neuroscientist at the University of California, San Francisco (UCSF), who is leading one of the projects.

DARPA is supporting Chang’s group and another at Massachusetts General Hospital (MGH) in Boston, with the eventual goal of treating soldiers and veterans who have depression and post-traumatic stress disorder. Each team hopes to create a system of implanted electrodes to track activity across the brain as they stimulate the organ.

The groups are developing their technologies in experiments with people with epilepsy who already have electrodes implanted in their brains to track their seizures. The researchers can use these electrodes to record what happens as they stimulate the brain intermittently ― rather than constantly, as with older implants.

Mood map
At the SfN meeting, electrical engineer Omid Sani of the University of Southern California in Los Angeles — who is working with Chang’s team — showed the first map of how mood is encoded in the brain over time. He and his colleagues worked with six people with epilepsy who had implanted electrodes, tracking their brain activity and moods in detail over the course of one to three weeks. By comparing the two types of information, the researchers could create an algorithm to ‘decode’ that person’s changing moods from their brain activity. Some broad patterns emerged, particularly in brain areas that have previously been associated with mood.

Chang and his team are ready to test their new single closed-loop system in a person as soon as they find an appropriate volunteer, Sani says. Chang adds that the group has already tested some closed-loop stimulation in people, but he declined to provide details because the work is preliminary.

The MGH team is taking a different approach. Rather than detecting a particular mood or mental illness, they want to map the brain activity associated with behaviours that are present in multiple disorders — such as difficulties with concentration and empathy. At the SfN meeting, they reported on tests of algorithms they developed to stimulate the brain when a person is distracted from a set task, such as matching images of numbers or identifying emotions on faces.

The researchers found that delivering electrical pulses to areas of the brain involved in decision-making and emotion significantly improved the performance of test participants. The team also mapped the brain activity that occurred when a person began failing or slowing at a set task because they were forgetful or distracted, and found they were able to reverse it with stimulation. They are now beginning to test algorithms that use specific patterns of brain activity as a trigger to automatically stimulate the brain.

Personalized treatment

Wayne Goodman, a psychiatrist at Baylor College of Medicine in Houston, Texas, hopes that closed-loop stimulation will prove a better long-term treatment for mood disorders than previous attempts at deep-brain stimulation — partly because the latest generation of algorithms is more personalized and based on physiological signals, rather than a doctor's judgement. “You have to do a lot of tuning to get it right,” says Goodman, who is about to launch a small trial of closed-loop stimulation to treat obsessive–compulsive disorder.

One challenge with stimulating areas of the brain associated with mood, he says, is the possibility of overcorrecting emotions to create extreme happiness that overwhelms all other feelings. Other ethical considerations arise from the fact that the algorithms used in closed-loop stimulation can tell the researchers about the person’s mood, beyond what may be visible from behaviour or facial expressions. While researchers won't be able to read people's minds, “we will have access to activity that encodes their feelings,” says Alik Widge, a neuroengineer and psychiatrist at Harvard University in Cambridge, Massachusetts, and engineering director of the MGH team. Like Chang and Goodman’s teams, Widge’s group is working with neuroethicists to address the complex ethical concerns surrounding its work.

Still, Chang says, the stimulation technologies that his team and others are developing are only a first step towards better treatment for mood disorders. He predicts that data from trials of brain implants could help researchers to develop non-invasive therapies for mental illnesses that stimulate the brain through the skull. “The exciting thing about these technologies,” he says, “is that for the first time we’re going to have a window on the brain where we know what’s happening in the brain when someone relapses.”
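
Schematically, the "closed loop" here is an ordinary control loop: decode a state estimate from neural features, stimulate only when it crosses a threshold, and otherwise stay off. A sketch in Python; the linear decoder, its weights, and the threshold are all invented stand-ins for the per-patient decoders the article describes:

```python
import numpy as np

# Invented stand-in for a per-patient decoder fit from recorded data,
# mapping neural features (e.g. band power per region) to a mood score.
DECODER_W = np.array([0.8, -0.3, 0.5])
THRESHOLD = -1.0  # invented trigger level

def closed_loop_step(features):
    """One tick of the loop: decode, then stimulate only if needed.
    Older open-loop implants stimulated constantly instead."""
    score = float(DECODER_W @ features)
    return "stimulate" if score < THRESHOLD else "off"

print(closed_loop_step(np.array([-1.0, 3.0, 0.0])))  # score -1.7 -> stimulate
print(closed_loop_step(np.array([0.5, 0.0, 0.5])))   # score 0.65 -> off
```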

Gearhead
Feb 13, 2007
The Metroid of Humor
Once again, Kojima warned us.

MonsieurChoc
Oct 12, 2013

Every species can smell its own extinction.
After re-reading all of Battle Angel Alita last week, I wanna give Eclipse Phase another chance. I feel like, while the setting details differ, both are similar in their idea of a failed singularity.

Yoshimo
Oct 5, 2003

Fleet of foot, and all that!
At this point if I want to introduce EP to my gaming group, is it best to just wait for 2e or should I get cracking with the 1e books just now?

sexpig by night
Sep 8, 2011

by Azathoth

Yoshimo posted:

At this point if I want to introduce EP to my gaming group, is it best to just wait for 2e or should I get cracking with the 1e books just now?

I'd say wait for 2e

mdct
Sep 2, 2011

Tingle tingle kooloo limpah.
These are my magic words.

Don't steal them.
2e would be around five million times easier to introduce to a group than 1e, because the edition shift takes character creation from "takes like 12 hours for your first character ever" down to "takes like 20 minutes," which goes down way smoother. From what I've seen of it so far (and I'm writing a full adventure for it, so I've seen quite a lot), 2e cuts away enough of the jank that it'll be a much more streamlined game while still being a relatively robust system.

sexpig by night
Sep 8, 2011

by Azathoth
Yea, if they've played before I'd say fuck it, but with character creation going from a GURPS-level slog to a fast flowchart-like system you might as well just wait rather than forcing them to go through that annoying system only to have a better one come...

soon? When IS 2e slated to drop, again, anyway?

Lord_Hambrose
Nov 21, 2008

*a foul hooting fills the air*

Do people just hate the package system from Transhuman or whatever? That was a good way to get a pretty believable character pretty quickly.

Always worked fine for me.

Negative Entropy
Nov 30, 2009

Lord_Hambrose posted:

Do people just hate the package system from Transhuman or whatever? That was a good way to get a pretty believable character pretty quickly.

Always worked fine for me.

Yeah I don't know what's with all the crying. I never had an issue getting players to make characters and the transhuman systems made it even easier.

I guess all the crunchy game lovers just never comment about it but the haters complain.
Squeaky wheel.

Negative Entropy
Nov 30, 2009

Btw I'm at a Japanese street food place and they have "Takko salad".

Kai Tave
Jul 2, 2012
Fallen Rib

Kommando posted:

Yeah I don't know what's with all the crying. I never had an issue getting players to make characters and the transhuman systems made it even easier.

I guess all the crunchy game lovers just never comment about it but the haters complain.
Squeaky wheel.

I like crunchy games but Eclipse Phase front-loads all the crunch into chargen, possibly one of the least interesting parts of a game, while the actual gameplay is just your bog-standard percentile system with six separate skills for Shoot Future Guns for some reason. It is absolutely unnecessarily tedious to make characters in Eclipse Phase, and the payoff in terms of actual play once the dice hit the table isn't worth the slog imo.

sebmojo
Oct 23, 2010


Legit Cyberpunk

yep. it's kind of a poop system so anything that makes it better is worth waiting for imo.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Is there any way to pre-order 2e at this point? If nothing else it would be good to get the schedule updates.

Goa Tse-tung
Feb 11, 2008

;3

Yams Fan
btw Altered Carbon is on Netflix and is obviously Eclipse Phase as fuck
