Microsoft lays off an ethical AI team as it doubles down on OpenAI

Andrei Chiffa

Summary

While investing in OpenAI to gain an edge in the LLM-based chatbot space, Microsoft quietly lays off its AI safety and ethics team.

 

Quotes

Quote

 Microsoft laid off an entire team dedicated to guiding AI innovation that leads to ethical, responsible and sustainable outcomes. The cutting of the ethics and society team, as reported by Platformer, is part of a recent spate of layoffs that affected 10,000 employees across the company.

[...]

The ethics and society team wasn’t very large — only about seven people remained after a reorganization in October. Sources who spoke with Platformer said pressure from the chief technology officer Kevin Scott and CEO Satya Nadella was mounting to get the most recent OpenAI models, as well as next iterations, into customers’ hands as quickly as possible.

[...]

Teams like Microsoft’s ethics and society department often pull the reins on big tech organizations by pointing out potential societal consequences or legal ramifications. Microsoft perhaps didn’t want to hear “No” anymore as it became hell-bent on taking market share away from Google’s search engine. The company said every 1% of market share it could pry from Google would result in $2 billion in annual revenue.

 

My thoughts

Looks like Bing Chat gaslighting Luke and telling him it loves him is not a bug but a feature, and going forward, more and more AI products, especially from Microsoft, will behave in "interesting" ways.

 

Sources

https://techcrunch.com/2023/03/13/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai/

https://www.independent.co.uk/tech/microsoft-layoff-ai-ethics-team-b2300237.html


3 minutes ago, Andrei Chiffa said:

part of a recent spate of layoffs that affected 10,000 employees across the company.

Don't you think this makes the title a bit misleading, or at least missing important context?

4 minutes ago, Andrei Chiffa said:

Teams like Microsoft’s ethics and society department often pull the reins on big tech organizations by pointing out potential societal consequences or legal ramifications. Microsoft perhaps didn’t want to hear “No” anymore as it became hell-bent on taking market share away from Google’s search engine.

This sounds like pure speculation. Also, given these were Microsoft employees and not regulators, they could just be ignored if Microsoft didn't want to follow some of their advice.

 

Also honestly I can't help but think these "AI ethics" teams mainly exist as publicity stunts for corporations trying to avoid regulation... "AI" is such a broad term and field that the idea that a handful of people would be generally qualified to determine ethical development across all of them sounds absurd.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


11 minutes ago, Andrei Chiffa said:

Summary

While investing in OpenAI to gain an edge in the LLM-based chatbot space, Microsoft quietly lays off its AI safety and ethics team.

 

Yeah, who cares about ethics and the systemic impacts of something when there are short-term financial gains to chase and simple linear or design thinking to do.

The people on this team will be fine. Places for system-wide thinkers in business exist, at best, only at the very top. Hopefully they'll find good jobs in academia, or become consultants on how to cope with the broken AI systems that will be pushed onto the market.

 

1 minute ago, Needfuldoer said:

What could possibly go wrong?


Yo @Mark Kaine , it seems there's a chance Bing will remain tsundere (or yandere).
Just like how you like 'em. 🤣

There is approximately 99% chance I edited my post

Refresh before you reply

__________________________________________

ENGLISH IS NOT MY NATIVE LANGUAGE, NOT EVEN 2ND LANGUAGE. PLEASE FORGIVE ME FOR ANY CONFUSION AND/OR MISUNDERSTANDING THAT MAY HAPPEN BECAUSE OF IT.


Do we know what the ethics board even did?

If it was like the Google one where they had a highly religious person who freaked out because he thought the program had a soul (after the religious person had constantly asked it leading questions) then I understand why they laid them off.

 

It's important to understand that just because they are laying off their ethics department doesn't mean they want to become unethical, or that they won't have any oversight anymore. It's just that they aren't going to pay 7 people full-time salaries to do that job, whatever it was. Maybe they realized it didn't actually contribute anything meaningful? Maybe the ethics board just said yes to everything anyway, or maybe it was full of technologically illiterate people who anthropomorphized a piece of software and asked for "robot rights" or whatever?

 

I feel like we are missing a lot of important details about this story and people are just jumping to conclusions based on the headline.


9 minutes ago, LAwLz said:

Do we know what the ethics board even did?

If it was like the Google one where they had a highly religious person who freaked out because he thought the program had a soul (after the religious person had constantly asked it leading questions) then I understand why they laid them off.

 

It's important to understand that just because they are laying off their ethics department doesn't mean they want to become unethical, or that they won't have any oversight anymore. It's just that they aren't going to pay 7 people full-time salaries to do that job, whatever it was. Maybe they realized it didn't actually contribute anything meaningful? Maybe the ethics board just said yes to everything anyway, or maybe it was full of technologically illiterate people who anthropomorphized a piece of software and asked for "robot rights" or whatever?

 

I feel like we are missing a lot of important details about this story and people are just jumping to conclusions based on the headline.

 

Yup, makes for a funny headline with all the AI stuff these days, but it was part of a larger bunch of layoffs, and who knows what plans, if any there may be to replace their function.


2 hours ago, LAwLz said:

Do we know what the ethics board even did?

If it was like the Google one where they had a highly religious person who freaked out because he thought the program had a soul (after the religious person had constantly asked it leading questions) then I understand why they laid them off.

 

It's important to understand that just because they are laying off their ethics department doesn't mean they want to become unethical, or that they won't have any oversight anymore. It's just that they aren't going to pay 7 people full-time salaries to do that job, whatever it was. Maybe they realized it didn't actually contribute anything meaningful? Maybe the ethics board just said yes to everything anyway, or maybe it was full of technologically illiterate people who anthropomorphized a piece of software and asked for "robot rights" or whatever?

 

I feel like we are missing a lot of important details about this story and people are just jumping to conclusions based on the headline.

Yeah, but in that case they fired him because he released sensitive information, not because he thought the program had a soul. I don't think Microsoft would hire tech-illiterate people and pay them a full salary to work a tech job. Do we have any evidence that they have a replacement? With these sorts of things, large companies should probably have some duty to inform the public about what they are doing to manage risks from their technology. You may be correct, but at the same time it does not seem like a good idea given what we currently know about this.

 

 I also feel like oversight of this probably requires more than 7 people anyway because of the uncertainties involved.


2 hours ago, LAwLz said:

It's important to understand that just because they are laying off their ethics department doesn't mean they want it to become unethical, or that they won't have any oversight anymore.

"It's important to understand that just because they are closing down their PR department doesn't mean they want to be unaccountable, or that they won't respond to press enquiries anymore." 🙃

"Nobody has any intention of building a wall."

 

I agree that laying off the ethics department is not a huge loss. An ethics council should be mandatory for billion-dollar corporations, and the council should be independent. What's the likelihood that an employee would make an unbiased ethical assessment of their employer's projects?


3 hours ago, Sauron said:

Also honestly I can't help but think these "AI ethics" teams mainly exist as publicity stunts for corporations trying to avoid regulation... "AI" is such a broad term and field that the idea that a handful of people would be generally qualified to determine ethical development across all of them sounds absurd.

I don't think it was necessarily just for PR reasons, but an AI ethics department is also shorthand for "handcuff research because it might have an impact on a piece of society".

 

Actually, this whole thing seems a bit click-baity, along with the job of AI ethicist. It seriously seems like some of the AI ethics work creates roadblocks to slow down the progress of research and product development.

 

15 minutes ago, HenrySalayne said:

I agree that laying off the ethics department is not a huge loss. An ethics council should be mandatory for billion-dollar corporations, and the council should be independent. What's the likelihood that an employee would make an unbiased ethical assessment of their employer's projects?

So even if you go back to the Platformer article, their claim (and it's only a claim, as I don't trust the source) is that the ethics department was the one writing memos of essentially the "those poor artists" variety when Bing was proposing text-to-image. So an ethics department with a self-righteous attitude can have negative impacts on actually useful products that get released. It's the same type of argument that was made when production lines and other automated lines came in... the "think of the workers" argument.

 

Not saying there aren't some good areas of AI ethics, but ultimately it's a business, and if they want to do something they can.

3735928559 - Beware of the dead beef


2 hours ago, oali24 said:

Yeah, but in that case they fired him because he released sensitive information not because he thought the program had a soul, I don't think microsoft would hire tech illiterate people and pay them a full salary to work a tech job.

Why do you think the people working with ethics have technical experience?

I looked at some of the people who worked on ethics at Microsoft on LinkedIn, and none of them had worked as, for example, a programmer before.

Their previous jobs and education were in things like history, HR, and creative work like design.

 

 

2 hours ago, oali24 said:

Do we have any evidence that they have a replacement? 

My argument is that they might not need to be replaced because they might not have been needed to begin with. 

 

 

2 hours ago, oali24 said:

With these sorts of things, large companies should probably have some duty to inform the public about what they are doing to manage risks from their technology. You may be correct, but at the same time it does not seem like a good idea given what we currently know about this.

They might still do that. Again, do we even know what these people did? I feel like people are jumping to conclusions after just reading the headline. 

 

 

2 hours ago, oali24 said:

I also feel like oversight of this probably requires more than 7 people anyway because of the uncertainties involved.

What do you want those people to do exactly? 7 full-time employees means 280 hours a week. I just have a hard time thinking of what this department might have done and what the work might have resulted in.
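For what it's worth, here's a quick sanity check on that capacity figure, assuming each of the 7 was a standard 40-hour-a-week full-timer:

```python
# Combined weekly capacity of the 7-person team
# (assumption: standard 40-hour full-time weeks)
team_size = 7
hours_per_week = 40
weekly_capacity = team_size * hours_per_week
print(weekly_capacity)  # 280
```

So roughly 280 person-hours a week, which is still tiny next to the size of Microsoft's AI effort.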


3 minutes ago, LAwLz said:

Why do you think the people working with ethics have technical experience?

I looked at some of the people who worked on ethics at Microsoft on LinkedIn, and none of them had worked as, for example, a programmer before.

Their previous jobs and education were in things like history, HR, and creative work like design.

My point is that I don't think Microsoft would hire people who are not qualified.

7 minutes ago, LAwLz said:

What do you want those people to do exactly? 7 full-time employees means 280 hours a week. I just have a hard time thinking of what this department might have done and what the work might have resulted in.

They could do research and advise programmers on how to manage ethical risks from their work. There are probably at least thousands of programmers working on AI at Microsoft right now, and 7 people is not that many to advise all of them. Again, maybe they don't need that many people on this team, but AI is clearly an important thing that will have a lot of implications going forward.


26 minutes ago, LAwLz said:

 

My argument is that they might not need to be replaced because they might not have been needed to begin with. 

 

Based on past events with AI, programmers seem to be really bad at thinking through all the implications of their work.

Finding creative systems thinkers who also have technical knowledge would be a good way to find people who can think about the tech in a broader way. Sometimes geeks and code monkeys get lost in the details of writing code and forget about how what they are doing might be used. Experiments without theories to test are meaningless, and theories without experiments are just guesses.

Here's an even more cynical take. A lot of AI ethics experts are in academia. Why pay 7 people around, what, $150K each when you can offer a competitive grant for professors and graduate students to work on the problem? A lot of R&D is sourced from colleges and just used by industry. It's basically outsourcing the work to people who publish it and give it away rather than paying for it. Yeah, as I said earlier, it is necessary work.


30 minutes ago, Uttamattamakin said:

How do we make sure that the AI gives valid answers that do not forward misinformation just because it is popular on the internet,

Which ChatGPT is already doing. They must have trained it on some questionable data (probably forum posts 🤮).

 

1 hour ago, wanderingfool2 said:

[...] the ethics department [...] wrote memos about essentially the "those poor artists" when Bing was proposing text to image

Why do you bother to write so many sentences when that's all you need? It already contains all your subjective perception and opinion about the matter - total time saver.

 

How do you treat "art machines" if unlicensed IP was used to train them? I don't think that question has been answered. If you cover or sample a song, you have the same problems. To my knowledge, courts haven't found a simple and consistent answer yet. Therefore I don't really understand your amusement about this topic.


18 minutes ago, HenrySalayne said:

Which ChatGPT is already doing. They must have trained it on some questionable data (probably forum posts 🤮).

 

They may well be ok with it.  

 

Looking into the LinkedIn profiles, it seems one of their AI experts has a PhD in experimental psychology (human factors). You know, the wetware part of technology... the people who use it. (When they designed the 12VHPWR standard, they should've asked at least one such person to try plugging it in.)


38 minutes ago, oali24 said:

My point is that I don't think Microsoft would hire people who are not qualified.

That's a logical fallacy though.

Apparently they hired people who they now think aren't necessary, so going by what Microsoft thought they needed is not a good indicator.

 

For all we know, this might have been a situation where people with no understanding of the technology were judging it without understanding how it worked and as a result reached incorrect conclusions. I am not saying that's what happened, but without looking into this more I don't think we should rule out that as a real possibility.

 

 

41 minutes ago, oali24 said:

They could do research and advise programmers on how to manage ethical risks from their work. There are probably at least thousands of programmers working on AI at Microsoft right now, and 7 people is not that many to advise all of them. Again, maybe they don't need that many people on this team, but AI is clearly an important thing that will have a lot of implications going forward.

I think this logic stems from a misunderstanding of how development happens. Let's say Microsoft has 1,000 programmers working on AI. They are not working on 1,000 individual products all at once. Those 1,000 employees might be working on 4 products, and those 4 products might not have new functional code more than maybe once a week that needs to be evaluated, and those evaluations might not be overly complex or time-consuming either if you already have a solid foundation to test on (like different scenarios that have already been created as good testing methodology).

 

And I kind of doubt the ethics board was talking to developers anyway. My guess is that they were consulted before a project started, maybe a few times during development, and then at the end of development before release, if that ever happened (they might have been laid off before even finishing a complete project).


7 minutes ago, HenrySalayne said:

Why do you bother to write so many sentences when that's all you need? It already contains all your subjective perception and opinion about the matter - total time saver.

 

How do you treat "art machines" if unlicensed IP was used to train them? I don't think that question has been answered. If you cover or sample a song, you have the same problems. To my knowledge, courts haven't found a simple and consistent answer yet. Therefore I don't really understand your amusement about this topic.

Your lack of understanding is exactly why I added those sentences.

 

The fact is simple: there are some departments, like AI ethics, which hold back technology because of the same self-righteous mentality as the "think of the children" argument. It creates a level of bureaucracy within the company where you need to satisfy those types of people; you spend resources and time countering their arguments, slowing progress or preventing a product from releasing... which is the example of "let's not release this product because it might affect artists and devalue their work".

 

The fact is, AI ethics teams are the ones who set up some of the roadblocks.

 


31 minutes ago, HenrySalayne said:

How do you treat "art machines" if unlicensed IP was used to train them? I don't think that question has been answered. If you cover or sample a song, you have the same problems. To my knowledge, courts haven't found a simple and consistent answer yet. Therefore I don't really understand your amusement about this topic.

The output of an AI image generator is no more related to its training dataset than your drawings would be to that Van Gogh you saw once.


I think the people whining are overthinking this. MS doesn't need an AI ethics team if they are partnered with OpenAI, who would have their own AI ethics team they could consult. They were made redundant, not cut as part of some grand conspiracy to use AI for evil reasons.

 

I swear the artificial intelligence that exists today can be more intelligent than real people...

🌲🌲🌲

 

 

 

◒ ◒ 


Tay AI always said she'd come back. Never thought it would be in the form of Bing, but at least 4chan will be happy. They'll get their perfect girlfriend back.

*Insert Witty Signature here*

System Config: https://au.pcpartpicker.com/list/Tncs9N

 


50 minutes ago, wanderingfool2 said:

The fact is simple: there are some departments, like AI ethics, which hold back technology because of the same self-righteous mentality as the "think of the children" argument. It creates a level of bureaucracy within the company where you need to satisfy those types of people; you spend resources and time countering their arguments, slowing progress or preventing a product from releasing... which is the example of "let's not release this product because it might affect artists and devalue their work".

 

The fact is, AI ethics teams are the ones who set up some of the roadblocks.

 

"The fact is", "The fact is" only followed by your thoughts and opinion doesn't sound very factual. You only present imaginary scenarios ("how your mind thinks it could be") and what negatives these would have. You could go on for pages with this stuff.

 

Since you ignore the question "How do you treat 'art machines' if unlicensed IP was used to train them?", it obviously has not been answered. And there is a simple answer: just license the IP used for training.

"But that would bring technological advancement to a grinding halt." - "No corporation should operate under such overwhelming restrictions." - "It is so unfair to consider ethics while working on new technology." - Please feel free to keep going with you thoughts why an ethics department is the worst thing in history, maybe ever...

 

47 minutes ago, Sauron said:

The output of an AI image generator is no more related to its training dataset than your drawings would be to that Van Gogh you saw once.

The copyright for Van Gogh expired and he is long gone - he is not a good example.

 

We had several stories of people finding their art in the output of those image generators. The output of an art generator is more comparable to a composition than to a fundamentally new creation. If I drew something in the style of Van Gogh, I would not only do it poorly, I would also spend a considerable amount of time painting. An art generator could produce billions of pictures of higher quality in the same time frame.

 

Since the machine can pretty much directly copy my (digital) art style and create an infinite amount of pictures with it, it devalues my own work. So I should be able to decide whether my works can be used to train it, and how much I want for it.


21 minutes ago, LAwLz said:

That's a logical fallacy though.

Apparently they hired people who they now think aren't necessary, so going by what Microsoft thought they needed is not a good indicator.

Just because they were laid off does not mean they were not qualified according to Microsoft. Microsoft could just as easily have decided to lay them off because they did not want the job to be done; whether Microsoft thinks they should have people doing that job is not relevant.

 

27 minutes ago, LAwLz said:

I think this logic stems from a misunderstanding of how development happens. Let's say Microsoft has 1,000 programmers working on AI. They are not working on 1,000 individual products all at once. Those 1,000 employees might be working on 4 products, and those 4 products might not have new functional code more than maybe once a week that needs to be evaluated, and those evaluations might not be overly complex or time-consuming either if you already have a solid foundation to test on (like different scenarios that have already been created as good testing methodology).

 

And I kind of doubt the ethics board was talking to developers anyway. My guess is that they were consulted before a project started, maybe a few times during development, and then at the end of development before release, if that ever happened (they might have been laid off before even finishing a complete project).

You can doubt that if you want, but are you trying to tell me there is no possible role in advising programmers on the design principles they should consider when developing software? I feel like even just having those ideas in mind while developing would have at least some effect on the end product.

 

2 hours ago, wanderingfool2 said:

I don't think it was necessarily just for PR reasons, but an AI ethics department is also shorthand for "handcuff research because it might have an impact on a piece of society".

 

Actually, this whole thing seems a bit click-baity, along with the job of AI ethicist. It seriously seems like some of the AI ethics work creates roadblocks to slow down the progress of research and product development.

People like you think that any sort of caution is bad, because for you anything that impedes the rate of progress in any way is automatically bad and even dangerous. What people like you don't consider is that those "roadblocks" may exist for a really good reason. By your logic, scientists should have been allowed to immediately clone humans instead of sheep, but because ethical considerations prevented that, those ethical considerations must be bad and serve purely to stop research into human genetics.

3 hours ago, wanderingfool2 said:

So even if you go back to the Platformer article, their claim (and it's only a claim, as I don't trust the source) is that the ethics department was the one writing memos of essentially the "those poor artists" variety when Bing was proposing text-to-image. So an ethics department with a self-righteous attitude can have negative impacts on actually useful products that get released. It's the same type of argument that was made when production lines and other automated lines came in... the "think of the workers" argument.

 

Not saying there aren't some good areas of AI ethics, but ultimately it's a business, and if they want to do something they can.

I read the article, and I think you are misrepresenting what the memo said. The memo pointed out how AI image generators could reproduce a specific artist's exact work and how that could hurt Microsoft's brand image. They were not only warning Microsoft that the product could be damaging to Microsoft itself, they were also advising them not to let their useful product be abused easily. Unless you are seriously going to argue that recreating copyrighted work in an AI image generator is good.


7 hours ago, Poinkachu said:

Yo @Mark Kaine , it seems there's a chance Bing will remain tsundere (or yandere).
Just like how you like 'em. 🤣

But how? I have the Bing chat thing on my phone, it's just like ChatGPT but stupid...! Did I miss something?

 

ChatGPT has at least the illusion of being smart 🤔

 

PS: and can I name her Cortana???

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


Just now, Mark Kaine said:

But how? I have the Bing chat thing on my phone, it's just like ChatGPT but stupid...! Did I miss something?

 

ChatGPT has at least the illusion of being smart 🤔

 

PS: and can I name her Cortana???

You might get hit with a cease & desist.
How about Croutona ?


This topic is now closed to further replies.
