Internet Explorer
Jun 1, 2005





*subtitle shamelessly stolen from Jeoh

Howdy folks! We're going to give this thread a try. If it doesn't work, we'll get rid of it!

There have been a few times where the big IT threads have veered off into the topic of ethics in IT. Usually, those conversations are more off-topic than not, and not everyone wants to read a "political" discussion in every single thread on SA. So when they don't peter out naturally, we mods try to steer the conversation back on topic. But it's a pretty interesting topic! And it comes up often enough that I think people want a place to discuss it. So here we are!

What are the ethics of working for a certain employer, industry, or government? Or how about the ethics of working in IT in general? What are some of the ethical issues around infosec, or even around having the "keys to the castle" for a small company? What obligations, if any, do IT workers have in society at large, considering their outsized power and privilege? Where are IT workers getting taken advantage of, and what can we do about it?

Now, for some ground rules:
  • This is not D&D or CSPAM. This is SH/SC. SH/SC rules apply. Neither movax nor I volunteered to moderate D&D or CSPAM, so this thread is going to get moderated how we would like to moderate it.
  • Put some effort into your posts. Be kind to one another. Post in good faith, and assume good faith in others. You can disagree, you can be firm, but acting like a jackass is going to get you probed. Low-content driveby posts are going to get you probed. Petty snipes are going to get you probed.
  • If this thread ends up being a pain in the rear end to moderate, or turns into a cesspool, it's going to get gassed.

Now, with all that being said, I'm looking forward to the discussion. I really want to hear what SH/SC has to say about this topic and I really think this could be a great thread. Who wants to start us off? What's been on your mind lately?

Internet Explorer fucked around with this message at 22:26 on Jun 26, 2021


22 Eargesplitten
Oct 10, 2010



There is no ethical computing under capitalism.

(USER WAS PUT ON PROBATION FOR THIS POST)

Thanks Ants
May 21, 2004

#essereFerrari


I try and follow the Bill and Ted law of being excellent to one another, but there's only so far you can go. I wouldn't work directly for an oil company, for example, and would try and avoid working on a project for one, but if you're going to rule out doing network consulting for a marketing company because they have Shell as a client, then you're going to find there's not really anything you can do - and we all need to earn money.

In general, whether it's intentional or not, a lot of the emphasis on individual responsibility for avoiding climate change by changing habits seems designed to deliberately distract from the impact of maybe 30 companies. Not to say individual choices can't have an impact, but they're not the biggest problem.

droll
Jan 9, 2020

by Azathoth
I worked for a small (~500 employees, with an IT team comprising 10 actual workers and about 5 do-nothing directors) company that was bought by a massive big pharma company (40,000 employees, god knows how many 'contract' workers). What shocked me was receiving an email from the CEO telling us to tell our congresspeople to vote against a bill that was meant to make drugs more affordable for people. "It will stifle innovation." I'd never seen the quiet part said out loud like that before. I will probably never work in such a company ever again, but holy poo poo was it amazing having the budget to buy whatever the gently caress I needed.

I think the shitpost above is right, though: nothing is totally ethical under our system. Even local socialist organizing seems to rely on big tech. It's just varying degrees of awfulness, with oil and pharma near the top.

droll fucked around with this message at 00:15 on Jun 27, 2021

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

droll posted:

I worked for a small (~500 employees, with an IT team comprising 10 actual workers and about 5 do-nothing directors) company that was bought by a massive big pharma company (40,000 employees, god knows how many 'contract' workers). What shocked me was receiving an email from the CEO telling us to tell our congresspeople to vote against a bill that was meant to make drugs more affordable for people. "It will stifle innovation." I'd never seen the quiet part said out loud like that before. I will probably never work in such a company ever again, but holy poo poo was it amazing having the budget to buy whatever the gently caress I needed.

Depending on the state, that is technically illegal. I had a client do this as well, and they caught fines. It usually falls under lobbying, and when your employer does it, even if they don't say so, it implies coercion.

CommieGIR fucked around with this message at 01:10 on Jun 27, 2021

Cithen
Mar 6, 2002


Pillbug
I don't know if this is within the intended scope of this thread, but I am curious about a detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

I have a passing awareness of the issues that have been more in the public eye over recent years, especially around racism, sexism, and general bias that emerges in AI, but I imagine that's just the tip of the iceberg. While that's an important topic in its own right, there has to be a much more robust and wide-ranging discussion happening somewhere about ethics in AI that goes beyond issues of bias.

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
It's about ethics in granting yourself access to the HR mailbox.

ask my previous co-worker

Thanks Ants
May 21, 2004

#essereFerrari


Cithen posted:

I don't know if this is within the intended scope of this thread, but I am curious about a detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

I have a passing awareness of the issues that have been more in the public eye over recent years, especially around racism, sexism, and general bias that emerges in AI, but I imagine that's just the tip of the iceberg. While that's an important topic in its own right, there has to be a much more robust and wide-ranging discussion happening somewhere about ethics in AI that goes beyond issues of bias.

It's not totally focussed on the AI topic, but this is a very accessible book I read on the issues of designing bias in:

https://www.ruinedby.design/

Watch a couple of Mike's recorded presentations, and if you enjoy them the books are worth a read.

Internet Explorer
Jun 1, 2005





Cithen posted:

I don't know if this is within the intended scope of this thread, but I am curious about a detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

I have a passing awareness of the issues that have been more in the public eye over recent years, especially around racism, sexism, and general bias that emerges in AI, but I imagine that's just the tip of the iceberg. While that's an important topic in its own right, there has to be a much more robust and wide-ranging discussion happening somewhere about ethics in AI that goes beyond issues of bias.

Would be happy to see this discussed here!

Chuck_D
Aug 25, 2003
I can say that for as bad a rap as local government gets in general, the IT shops are generally (generally) extremely ethical. As non-competitive entities, lots of municipalities share services or at least strategies around solving certain problems, and, as a rule, most take their role of being stewards of public dollars very seriously.

I'm an IT Director for a municipality and have been in the field for 14+ years, starting at helpdesk and moving up the ranks to executive leadership. Integrity and ethics are two ideals I live by and expect from my teams.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Worth noting that AI is both a very broad term for a bunch of technologies and a buzzword. The big issue is that we don't actually have AI, just trained relational stacks, and a lot of the bias comes from what the "AI" is trained on.

Impotence
Nov 8, 2010
Lipstick Apathy
Don't facial recognition and other "anti-theft/anti-petty-crime" systems also tend to be much more heavily deployed in lower socioeconomic or minority areas? That ends up adding fines and jail time, plus the ability to semipermanently lock someone out of the few places they can shop.

I know in the mid-6-digit median income areas here, nobody wants facial recognition and there aren't even obvious cameras at the self checkouts ("because there's no theft"). But go to a Walmart an hour away and there are mobile police towers with 360-degree PTZ IP cameras that automatically zoom in on you when you look at them, automatic licence plate scanning on entry and exit, and self checkouts recording you from multiple angles. Steal food once because you need to and you're almost certainly guaranteed to get caught on HD footage, while in areas without all this automation and robots roaming the parking lots, it would just be written off as shrinkage.

angry armadillo
Jul 26, 2010
I work for a private company that operates a number of prisons (amongst a bajillion other things - we are a giant global company, but I just work in the prison bit).

I get the impression private prisons are a plague upon the US, but I do not work there, and I sleep fairly easily given the way my company operates from an ethical perspective, so that is good.

What I do have to highlight are the very strict rules about how we can and cannot make money - for example, if the prisoners spend money on commissary, we have to use the profit from that money to invest in making the jail a better place for the prisoners' benefit - and there is a government dude on site who signs off on such projects to keep it all transparent.

From an IT perspective:
One really bad thing about my particular company is that we had a head of infosec about 10 years ago who created such a fear of risk across the company that we are behind our competitors to this day.

For example, you were never allowed wifi or cell phones in prisons, for fairly obvious reasons. Since the pandemic this has relaxed - the government gave us data-enabled iPads to allow prisoners to virtually attend funerals via Zoom.
In fairness, the haste with which that was deployed is probably the opposite end of the security spectrum, as it was a response to covid.

The issue our company has is that there is now guidance about deploying wifi securely that we could follow, but instead we are still trying to find a way to issue people PDAs that dock at the end of a shift to upload their work, rather than bothering with wifi - our attitude to any risk is to not do the thing at all, rather than mitigate it appropriately.

Cyber Punk 90210
Jan 7, 2004

The War Has Changed
I worked for Time Warner Cable for a few years. They pulled a guy from high up in IT to code the new database (I don't know the particulars), but they never gave him the title or pay raise. After 6 months he asked about it, and they pretty much said "Once the database is set up you can go back to being in IT," as if that was good news. He quit a couple weeks later, just before Thanksgiving.

Turns out he wrote a little killswitch into the system, and it locked itself on Thanksgiving Day, requiring managers to come in. They said the code was bad and they were able to get around it easily, but first he put pictures of the text messages up on Facebook: them raging at him and telling him he's going to jail, and then, later, the manager begging for the key.

Please remember that Time Warner Cable (Now Spectrum) is a terrible company that loves wage theft

Thanks Ants
May 21, 2004

#essereFerrari


I think legally he would have been hosed if they'd wanted to pursue it, though IANAL.

The lesson is that being asked to do something above your normal duties is the start of a pay negotiation.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thanks Ants posted:

I think legally he would have been hosed if they'd wanted to pursue it, though IANAL.

The lesson is that being asked to do something above your normal duties is the start of a pay negotiation.

This is the correct answer. Unfortunately, while I feel for him, setting up kill switches and stuff like that is just an easy way to go to jail or have fines levied against you, and the company WILL win.

I've had to do a few cases like that for clients, and while some of them were major dicks, most of them were being taken advantage of and it really did bother me helping build cases and evidence against them.

Impotence
Nov 8, 2010
Lipstick Apathy
yeah it's much easier to just write poo poo code and be bad at code; then either it breaks and you fix it and you're the hero, or you get fired and it breaks. totally legal version!


certainly not upset about cleaning up after a "30 years of experience" guy who learnt how to program in C, didn't want to understand integer overflow or underflow, and instead just used strings for everything, then did parseInt() or parseLong() as needed

Cyber Punk 90210
Jan 7, 2004

The War Has Changed
I'm not saying the guy was smart or wouldn't go to jail. It was funny to watch the manager who, days before, was texting begging-hands emojis now boasting that she is so much smarter than this dumb coder.

Biowarfare posted:

yeah it's much easier to just write poo poo code and be bad at code

I'm already halfway there :smug:

RFC2324
Jun 7, 2012

http 418

Cyber Punk 90210 posted:

Please remember that Time Warner Cable (Now Spectrum) is a terrible company that loves wage theft

(Now Lumen lol)

They have entered the "change our name every 6 months" cycle, and some friends of mine who are still with them like to gripe about the constantly changing signatures that they have to use

I just don't use a signature at all, and assume people can read my email address

Butter Activities
May 4, 2018
Now that CEH is toast can we become Certified Un-ethical Hackers?

jiggerypokery
Feb 1, 2012

...But I could hardly wait six months with a red hot jape like that under me belt.

How the gently caress can ML-based programs not consider the data they were trained on part of the program?

Specifically, I am thinking of GitHub Copilot. It's an ML tool that generates code by guessing the next line a developer wants to write.

Thing is, it has been trained on billions of lines of open source code, much of which is under licenses like the GPL that only permit reproduction under strict conditions.

It reproduces open source code verbatim regardless of the licence of the code it "learnt" from.

https://mobile.twitter.com/mitsuhiko/status/1410886329924194309
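
The verbatim case is mechanically easy to detect, which is what makes it so striking. Here's a toy sketch of such a check - not anything Copilot actually does; the function names, corpus format, and 6-line window are all invented for illustration:

code:
# Hypothetical sketch: flag verbatim n-line overlaps between generated
# code and a corpus of license-tracked source files. The corpus format
# and window size are assumptions, not any real tool's design.
def line_windows(lines, n):
    """Yield each run of n consecutive non-blank, stripped lines."""
    stripped = [ln.strip() for ln in lines if ln.strip()]
    for i in range(len(stripped) - n + 1):
        yield tuple(stripped[i:i + n])

def find_verbatim_matches(generated_code, corpus, window=6):
    """corpus is a list of (license_name, source_lines) pairs."""
    index = {}
    for license_name, source_lines in corpus:
        for gram in line_windows(source_lines, window):
            index.setdefault(gram, license_name)
    return [(index[gram], gram)
            for gram in line_windows(generated_code.splitlines(), window)
            if gram in index]
Real matching would need whitespace and identifier normalization, but even a check this naive would flag verbatim reproduction like the example in that tweet.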

Jowj
Dec 25, 2010

My favourite player and idol. His battles with his wrists mirror my own battles with the constant disgust I feel towards my zerg bugs.

SMEGMA_MAIL posted:

Now that CEH is toast can we become Certified Un-ethical Hackers?

wait did something happen to CEH?

The Gadfly
Sep 23, 2012

Cithen posted:

I don't know if this is within the intended scope of this thread, but I am curious about detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

I have passing awareness of the issues that have been more in the public eye over recent years, especially around racism, sexism, and general bias that emerges in AI, but I imagine that's just the tip of the iceberg. While an important topic in its own right, there has to be a much more robust and wide-ranging discussion happening somewhere about ethics in AI that goes beyond issues of bias.

As someone who has coded neural networks and other types of machine learning algorithms from scratch (feedforward, backprop, etc.), I can tell you that they have no inherent bias when identifying features in data. In a neural network, each node initially starts with the same weight, which is updated and propagated throughout the network with each training data point.

Granted, machine learning is only as good as the data. If you feed the neural network garbage data in, then you'll receive garbage data out. This includes the omission of valid data. Trust me, the bias coming out of "AI" systems is much more likely to reflect the human biases of researchers who try to make the algorithm output their own biased expected results than a gradient descent machine learning algorithm getting the answer flat out wrong. Machines are not biased, but the people (and thus the data fed to the machine) can be, and often are.

The most important way of combating this bias is to ensure that the data being fed into the machine is factual, correct, and complete.
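
A minimal sketch of that garbage-in, garbage-out point, with entirely made-up data: train an ordinary logistic regression by gradient descent on labels that encode a biased hiring rule, and the model dutifully learns the bias. Nothing here comes from any real system; every name and number is invented.

code:
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)           # the feature that *should* matter
group = rng.integers(0, 2, size=n)   # a protected attribute

# Invented biased historical labels: group 1 needed a higher bar to be hired.
y = (skill > np.where(group == 1, 1.0, 0.0)).astype(float)

X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):                # plain logistic-regression training
    p = 1 / (1 + np.exp(-X @ w))     # sigmoid
    w -= 0.1 * X.T @ (p - y) / n     # gradient step on mean log-loss

print(w)  # large negative weight on `group`: the bias was learned, not invented
The algorithm did nothing wrong in the mechanical sense; it found exactly the pattern the labels contained. That's the sense in which the bias comes from the data and the people behind it.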

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Jowj posted:

wait did something happen to CEH?

Yup, EC-Council plagiarized a bunch of blog articles on security. This is not their first issue, either; they had some major misogyny issues a year or so ago.

RFC2324
Jun 7, 2012

http 418

CommieGIR posted:

Yup, EC-Council plagiarized a bunch of blog articles on security. This is not their first issue, either; they had some major misogyny issues a year or so ago.

that seems rather unethical

Jowj
Dec 25, 2010

My favourite player and idol. His battles with his wrists mirror my own battles with the constant disgust I feel towards my zerg bugs.

CommieGIR posted:

Yup, EC-Council plagiarized a bunch of blog articles on security. This is not their first issue, either; they had some major misogyny issues a year or so ago.

cueh,

apseudonym
Feb 25, 2011

The Gadfly posted:

As someone who has coded neural networks and other types of machine learning algorithms from scratch (feedforward, backprop, etc.), I can tell you that they have no inherent bias when identifying features in data. In a neural network, each node initially starts with the same weight, which is updated and propagated throughout the network with each training data point.

Granted, machine learning is only as good as the data. If you feed the neural network garbage data in, then you'll receive garbage data out. This includes the omission of valid data. Trust me, the bias coming out of "AI" systems is much more likely to reflect the human biases of researchers who try to make the algorithm output their own biased expected results than a gradient descent machine learning algorithm getting the answer flat out wrong. Machines are not biased, but the people (and thus the data fed to the machine) can be, and often are.

The most important way of combating this bias is to ensure that the data being fed into the machine is factual, correct, and complete.

It's more than data -- good data alone doesn't mean your ML algorithm won't produce a biased model, even supposing we could define or collect perfectly good data from the real world in the first place. Consider how everyone in ML likes to measure a model's efficacy: false positive and false negative rates. Those metrics in a vacuum (and they're always used in a vacuum) do nothing to prevent bias, and even encourage it. Even with a magically representative dataset, those metrics will absolutely allow an algorithm to decide that being wrong 100% of the time about a minority that makes up 0.5% of the population is a far better model than being wrong 2% of the time across the other 99.5%.

Blindly minimizing the mean error function is itself a bias, and the ML field is absolutely abysmal at understanding the shape of the error space and what it implies for a given application.

Blaming the researchers is an oversimplification and won't solve the fairness problems AI has. We take error functions and algorithms that are as naive as a small child, slam a bunch of data taken from the real world into them, and are somehow surprised when they say hosed up things.
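
Running those numbers explicitly (the 0.5% and 2% are from the paragraph above; the rest is just arithmetic):

code:
# Judged by overall error alone, a model that is wrong 100% of the
# time on a 0.5% minority "beats" one that is wrong 2% of the time
# on the other 99.5% of the population.
minority_share = 0.005

model_a = minority_share * 1.00        # fails the entire minority: 0.50% overall
model_b = (1 - minority_share) * 0.02  # 2% wrong on the majority:  1.99% overall

print(f"model A overall error: {model_a:.2%}")
print(f"model B overall error: {model_b:.2%}")
# Mean error picks model A -- the one that writes off an entire group.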

apseudonym fucked around with this message at 21:11 on Jul 4, 2021

RFC2324
Jun 7, 2012

http 418

What's the solution to that tho? Provide a dataset that doesn't reflect the real world?

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
Don't pretend that ML is magic fairy dust you can sprinkle over important decisions and then use to excuse yourself when it inevitably goes wrong.

RFC2324
Jun 7, 2012

http 418

oh, I thought there was some solution people were looking at beyond the "having common sense" that basically everyone lacks

apseudonym
Feb 25, 2011

RFC2324 posted:

What's the solution to that tho? Provide a dataset that doesn't reflect the real world?


Volmarias posted:

Don't pretend that ML is magic fairy dust you can sprinkle over important decisions and then use to excuse yourself when it inevitably goes wrong.

In short, this. On the theory side there has been more interest lately in slightly less naive treatments of error, but I'm a bit behind on papers so :shrug:.

At a minimum you should understand your model's failures, especially the bad ones, and actively test for them. Since models from modern algorithms are borderline un-understandable except through evaluation, explicitly testing for biases is required. The first thing I ask anyone who is doing ML is "tell me about the cases your model gets wrong"; if they don't have any idea, you should expect some surprising failures to show up in the worst possible way.

E: Point being, "it's just bad data or bad researchers" understates just how deep the problem goes.
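
A minimal sketch of what "explicitly testing for biases" can look like in practice. Everything here is an assumption for illustration -- the sklearn-style `model.predict`, the `slice_ids` array marking which group each row belongs to, and the arbitrary 5% threshold -- a shape, not anyone's real pipeline.

code:
import numpy as np

def per_slice_error(model, X, y, slice_ids):
    """Error rate for each slice of the evaluation set, not just overall."""
    errors = {}
    for s in np.unique(slice_ids):
        mask = slice_ids == s
        pred = model.predict(X[mask])
        errors[s] = float(np.mean(pred != y[mask]))
    return errors

def check_fairness(model, X, y, slice_ids, max_error=0.05):
    """Fail loudly when any slice degrades past the threshold."""
    errors = per_slice_error(model, X, y, slice_ids)
    worst = max(errors, key=errors.get)
    assert errors[worst] <= max_error, (
        f"slice {worst!r} has {errors[worst]:.1%} error -- "
        "know the cases your model gets wrong")
    return errors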

The Gadfly
Sep 23, 2012
The point is that machines do not have a particular bias. Node weights are technically biases, but they are strictly the result of the data, for the purpose of finding optimal features. In a machine, a bias concretely does not exist at all before the data set starts being evaluated. People, on the other hand, are almost always biased before, during, and after the data is evaluated.


apseudonym
Feb 25, 2011

The Gadfly posted:

The point is that machines do not have a particular bias. Node weights are technically biases, however they are the result of strictly data for the purpose of finding optimal features. In machines, a bias concretely does not exist at all before the data set starts being evaluated. People, on the other hand, are almost always biased before, during, and after the data is evaluated.

Your evaluation function immediately introduces a bias: minimize the mean error above all else. That is absolutely a bias, and one that causes problems in any real world system.


Even if the implementer were perfectly ethical, that would not mean the result of their ML project is. It's easy to say "oh, these ML fairness issues are the result of bad people, but I wouldn't do that" or "it wouldn't happen if we were all good people," but that's not true, and it's not productive in fixing things. It's not some deep evil conspiracy that produced the never ending stories of ML bias and ethics failures in IT, but naivete, both in the tools we use and in the way we approach our problems and put blind faith in those tools.
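
In symbols (my decomposition of the point above, not a formula from the post): the mean error an optimizer sees splits over groups as

\[ \mathrm{Err} \;=\; \sum_{g} p_g \, \mathrm{err}_g , \]

so a group with population share \( p_g = 0.005 \) contributes at most \( 0.005 \) to the objective even when \( \mathrm{err}_g = 1 \). Any change that shaves more than half a point off the majority's error "pays" for abandoning that group entirely, and nothing in the objective pushes back.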
