GabD

Member
  • Posts: 13

  1. If you have an official diagnosis to back it up, yes. This also means going to a certified psychologist or psychiatrist and getting a paper certifying that you are ineligible for employment and entrepreneurship due to having a gaming disorder, and that this prevents you from maintaining any sort of income. IDK how it works in the USA past that point, but usually you are then required to get help for that disorder in order to become functional, which means going to therapy, where you will most likely be exposed as a fraud. And if you don't go to therapy, you may become ineligible for social assistance. The whole ordeal would take more time and effort than just getting a job, since you are obviously a fraud and able to earn an income. You'll even make more money for your effort that way.
  2. Posted in Off Topic Chit Chat: Happy moving day! https://www.theguardian.com/cities/2018/jun/29/montreal-moving-day-what-happens-when-a-whole-city-moves-house-at-once
  3. Alright, I’ll keep this rebuttal short:
     - I agree with the research you posted; in fact, I read it a while ago.
     - It indeed proves that: 1) biases are a thing, 2) it is overwhelmingly common to be unreasonable due to bias, and 3) biases can be detected, observed, described and understood.
     - The research details the observation that “convincing” happens relatively rarely in everyday life under statistically normal circumstances, because the vast majority of humans are rarely rational in their everyday life under those circumstances.
     - The fact that most people usually aren't rational in their moral judgements doesn't make rationality impossible. That's a naturalistic fallacy; you can’t base your argument on that.
     - But the fact that "“convincing” rarely happens" actually proves my point, in disfavor of yours: it necessarily implies that it does in fact happen sometimes, and thus that it is indeed possible and can be done.
     - And if it’s both possible and has been described, it can be understood and can theoretically also be replicated. Ergo, we could know and employ methods to increase the likelihood of people being convinced by facts and reason.
     Your own sources back my argument more than yours.
  4. You seem to assume that this is an exceptional case of disagreement, when it isn't. As per your own argument, it is absolutely impossible, for all time and all contexts, to satisfy all involved parties (and with no regard for the arguments of any party). To be coherent, though, this would mean that we would have the right to deny the use of anything in existence on the basis of disagreement, which we don't, and for good reason. Also, human actions certainly don't satisfy all involved parties, yet you seem to have no problem with them being sub-optimal. Humans already decide between people whether they know it or not, and they do make decisions with potentially or necessarily lethal consequences. The only reason humans aren't held to the same standard that you wish to hold machines to is that they can't process information as quickly or efficiently as machines currently can. Human error, even when the consequences are generally worse, is seen as less of a problem than the consequences of non-human mechanisms, as if machines were more ethically responsible than conscious, informed humans, or as if they could be held more accountable. The failure rate of human perception and interpretation is a LOT worse than that of even the most cheap-ass sensors on the market. We have fewer solutions for when humans are the defective element.

No, because if people can't be convinced, then it is impossible for people to accept reality unless that was the position they held from the start. Otherwise, that would imply being convinced of it, which you argue is impossible. This is why I said that this notion is pointless. By your own argument, it follows that it is impossible to "accept" reality; it is only possible to be lucky enough to have an initial belief that happens to be representative of reality at some point. And you had better hope that reality doesn't change too much, because then you could end up being wrong, since even the changes in reality couldn't convince you that it has changed: if people cannot be convinced, they can never change beliefs. And if they do change beliefs, that implies a cause for that change, which leaves open the possibility that factors outside our own mindset can affect our mindset, i.e. that people can be convinced, which you argue is impossible. I hope you begin to see why I said that this is an untenable position.

As for my sentence being self-defeating, it would only be so if your assumption that people cannot be convinced were true. Yet you haven't provided proof for it, only asserted it as if it were self-evident, while there is ample proof that one's beliefs can change. Linus changed his beliefs about things in at least a couple of his videos. And individuals have definitely changed their beliefs about cigarettes during their lifetime, once it was proven beyond reasonable doubt that cigarettes were actually detrimental to health instead of beneficial as previously believed. The ethical question "Should people smoke cigarettes?" has largely been resolved; hardly anybody would now answer "yes, you should". The only positive answer that can pass as reasonable comes with the conditional "if you really want to and are aware of the consequences". Again, asserting something as fact is not empirical proof. Your claim is one that requires empirical proof.
I actually am also trying to find the proof for you, but I have yet to see anything that would invalidate the evidence that individuals have indeed changed their minds in the past, and still do right now, which hints that they can. I mean, things would be much simpler if you were right. The implication of relativism towards all prescriptive judgements with an ethical connotation would be that there is no further need to invest time and effort in science, philosophy or any other academic discipline, since doing so wouldn't be in any way more justified than not doing it. The thing is that ethics applies to a lot more than you seem to think. Scientific research is just one example of things that are justified by the ethical judgement that their value is worth more than their absence. I'm not opposed to what you say because I want to be; I am because I think your assumptions are wrong, which leads you to an incorrect conclusion, which in turn has ramifications that I don't think you would accept either.

You'll eventually have to accept that there is a difference between "opinion" and "argument". You can dismiss someone else's ethical position (not their nature as an ethical agent) on the basis of their arguments, but not their opinion. This is what I mean by "opinions don't matter" or "are irrelevant". They're no basis for judgement; only their justification is. People can be of the same opinion while having better or worse justifications for the same conclusion. In effect, the positions with worse argumentation will be dismissable in favor of those with better argumentation, even though they argue for the same conclusion.

I agree that it has no agreed-upon resolution yet, not that it would be impossible to agree upon an optimal solution. There's a difference between "so far" and "for all time". The present lack of an agreed-upon solution does not make one impossible. Arguments are being made and considered. We'll see if they can be optimized, by trying to optimize them and perhaps succeeding. That takes time and effort, which the NSF recognizes and supports. Specialists, more competent people than you or I, have tackled this subject and found it worthy of further research. You argue against that judgement, not mine.

No, because we don't know that it has no resolution; that statement presently does not qualify for the status of knowledge. And no to the second part, because I want the alternative a lot less than I don't want this. Not having self-driving cars currently affects a much larger portion of the population fatally than the adoption of self-driving cars would. People dying is not desirable and currently happens a lot, without much possibility of, or will for, improvement on the human side. This is a statistical improvement compared to the alternatives. This is the less-wrong option.

"The problem" that you mentioned, and that my sentence was specifically referring to, was that of adopting a specific ethical algorithm on a large scale (rather than just at the level of communities). The solution you propose ends up being the same as not addressing the problem in the first place, i.e. not recognizing it as a problem. You can argue whether or not it really is a problem, but you can't propose inaction as a solution to something you'd recognize as a problem. If you'd argue that there's nothing we could do about it (that there is no solution), as you seem to tend to do, then you shouldn't talk about it as a problem, but as an unfortunate inevitability.
I guess the fact that I live in a part of the world where these specific issues are all much less prevalent than they can be in other parts of the world sort of prevents me from seeing the point you're trying to make. Still, contextual factors determine people, and arguments brought up in discussion are among those contextual factors (since people are frequently exposed to them by hearing and reading). Many things are innate, but opinions and arguments aren't. In the last five years, a large proportion of Canadians have been convinced to support legalizing weed, mostly by providing sound arguments for it. There are other cases around the world that apply to every one of your other examples.

By all means, please do. I honestly want to see if people have managed to provide a better justification for moral relativism. I personally can't fathom a valid argument for it, and every one of the 7 Ethics professors that I personally know starts the first class of the semester with a thorough debunking of it, in order to prevent students from writing nonsense in their term papers. If it turns out that it can be justified in a way that is valid, people ought to learn about it, as it would utterly revolutionize the discipline by invalidating it completely.

Nor did I claim to. Hence why I refer to what experts generally say about it. That's arguably more what sociologists do. And Ethics doesn't have to be ideal, only optimal. It is also usually about things that are very relevant to "the masses". An entire field of current academic research exists on the basis of these three statements being completely wrong. Now, as an undergrad student, I don't claim to be the best person to provide those arguments; I merely tried, and I think I did a decent job here. This field of research has argued for its continued existence and relevance, and successfully so. But you somehow think that you know the worth of a discipline better than the staggering majority of the specialists who actually have the qualifications to make that judgement. That you would know it better than even the National Science Foundation seems highly dubious to me. Maybe you should have some doubts about your qualifications in this matter, as I do about mine. If invalidating Ethics as a worthwhile discipline were so easy, it would likely have been done a very long time ago.

Only in the extremely small chance that they, or people they know about, find themselves in an "AI-must-decide-who-dies" situation. They're still actually much more likely to get run over and killed by a human driver, or to crash their own car and kill themselves, even after self-driving cars have become commonplace.

You are also able to find examples of individuals changing their minds during their own lifetimes. It does not take centuries to convince people. Convincing has necessary and sufficient conditions. They vary, of course, but there is a finite number of them. I'm not saying that I know how to convince anybody quickly. If I did, I'd be president of the world by now. I just know that convincing happens and is possible.

I don't think so, but it's possible I have failed. Though, I hope you realize that the worth and effectiveness of Ethics as a discipline does not rest on my perhaps-incapable shoulders alone. Maybe you might want to read up on what actual ethicists have to say about it, and about their methodology. Me, I'm just an undergrad student, and not the best one at that.
My proposition for the mechanisms you asked for is the same as that of any academic discipline: the method of dialogue and argumentation to defend a position in a convincing manner allows the competition and elimination of arguments. This can then be a method for finding solutions that make optimal sense for real problems. This is how academia in general works. If you invalidate this, you invalidate academia in general, including Philosophy, Arts, Technics and Sciences. And I can guarantee you that changing one's mind does happen in academia too. Some people even change their minds because their own research refutes their own previous conclusions. It's not all like that, but it happens.

Of course I'd be devastated by a traumatic event like that. Just as much as I'd be devastated if my mother was hit by a human driver whose brain made a move to run her over instead of someone else. Immediate reactions are decisions that our nervous systems make depending on the information that they can process. In both cases, not all possible factors are taken into account; even AI isn't gonna be omniscient in the foreseeable future. My being emotionally devastated by the death of my mother is arguably not very relevant. Maybe that delinquent kid has more people who care about him than my mother does. And maybe he would have ended up being a better person in the future. I wouldn't know, and neither would the foreseeable AI. I'd ALSO be pretty fucking sad if a delinquent kid died so my mother could survive. It's still a fucking tragedy. Being emotionally stunted would be "having a total lack of empathy for the life of strangers". I don't believe that the people I know and care about are inherently worth more than strangers. I feel for strangers too. I also honestly think that deciding solely based on age is a very sub-optimal criterion for this. Obviously, factors such as "chances of surviving being run over" are going to be more important than "how many trips around the sun that person has made". It's a bit disingenuous to jump directly to the most bait-y criteria, which the wording of the article itself doesn't help with.

What I argue is that not all arguments are equal. If an opinion is backed by a valid argument, that argument can be scrutinized. If an opinion is not backed by a valid argument, then it's still an opinion, but it's only an opinion, and not worth spending the time to consider with rigor. I argue that opinions regarding ethics are irrelevant, and that only the arguments matter for serious consideration. Other things that you might think should matter can be made to matter when framed in the form of a valid argument. I'm not talking about what people actually take as mattering in their day-to-day lives. People (myself and you included) generally don't constantly practice rigorous ethical enquiry in their day-to-day lives. That doesn't mean people can't do it when we do put our minds to it. And when people do that, it has the potential to go beyond mere opinion, when valid arguments are provided.

I haven't judged your opinions to be less important than mine. I have provided arguments which I think are more sound and convincing than yours. Let me be clear: your opinion is backed by arguments that I think could be valid, but in which the factual premises are empirically wrong, and in core parts make your arguments self-defeating. You haven't provided any empirical proof backing the fundamental fact-statement that people never get convinced (and thus somehow can't be).
It only takes one case of convincing to refute your argument. I have provided such a case, and also argued that convincing is possible. And if it is indeed possible, then the rest of your argument falls. Maybe you would care to continue trying to convince me that people can't be convinced, but at this point I have exhausted what little time I had available for this.
  5. I guess it depends on where your energy comes from. This is the first coin that should require an array of solar panels and capacitors in order to even attempt to be coherent with its mission statement... I mean, I'm all for getting people to stop depending on the birth and death of sentient beings for their protein supply, but I fail to see how cryptomining could particularly help in that regard. Is it being used to fund the development of lab-grown meat and eggs? Other alternatives? Etc.?
  6. Yet, "context" is also precisely what makes people agree when they do. People whose context made them share more similarities are more likely to agree with eachother on certain things. The good news is that it is possible to understand where other people come from, which facilitates dialogue. I may have been unclear in my explanation of what I mean by "context". (You have explained it better than me, see a few quotes below this one.) I could easily answer, "But that's just like, your opinion, man. Tough luck". You see, now my opinion, which "matters greatly" as you have argued, is precisely that "opinions don't matter". Ergo, they don't matter, since if they did, then my opinion wouldn't matter... And so on. There is no other way out of this. Opinions don't matter. Period. It is the only approach to this that makes any logical sense. And arguing against the importance of logical sense would be the equivalent of digging oneself into a bottomless hole. Now, about the sentence itself. Really, I don't "want" this either. Nobody "wants" to have machines making ethical judgements. There is a problem, and it needs fixing, that is all. We're going to have to make them make ethical judgements, because we have the rare opportunity to have them cause less damage than they otherwise could, if things were to go wrong. This is literally the point of the analogy with the trolley problem. A problem which you argue as unsolveable, by the way. Because, if I summarize properly: 1) opinions can't be changed, 2) people cannot see reason, and 3) all opinions are equally valid. Autism is irrelevant in this. Also, I have been diagnosed with Asperger's syndrome, if that is somehow relevant. I mentioned Antisocial Personality Disorder (what was formerly mislabeled as "sociopathy") specifically because it can handicap a person's capacity for moral judgement. Well, yeah, outside of extreme made-up examples, there rarely are obviously superior options. You have to look very closely to see a distinction when there is one, but that doesn't mean there never is any. This is precisely what I mean by "context". This is also what causes similarities in morality. The thing with differences is that they can be communicated and understood. It's not impossible, it has been done before. Maybe some people will never be capable of that, but some definitely are. Coming to an understanding is possible. And necessary in many cases. They still tend to vary less than among the whole of a country, or the whole of humanity. We're trying for an optimal solution. "Optimal" does not mean "ideal", it's merely as good as it can be. Also, my notion of an actual community doesn't get much bigger than about 60000 people. It is closer to municipalities. Study the history of social movements would show you people did and do change, just slower than you want them to. It takes sufficient exposure to the required factors for change (which are indeed slightly different for everybody), and that takes time and a lot of effort (but ironically not effort on the part of the person changing, the context/environment matters at least as much as the self) People change constantly wether they want to or not. Opinions about morals tend to be emotional (and arguably morals are if you distinguish them from Ethics). But that's not what we are talking about, which is an academic field of research. 
I'm not much of a believer in a distinction between morals and ethics, but my mention of "Ethics" should specifically refer to the inter-subjective, peer-reviewed, methodical work, not the subjective opinions that absolutely anyone can have. I won't explain the difference between subjectivity and inter-subjectivity here; you can google it if you need to.

How would I be wrong? The trolley problem has been an inseparable part of this thread because not only was it raised in the article you linked, but it also classically illustrates why you can't have a machine make such decisions. It is a core part of this thread. I was saying that the point and goal of this thread wasn't SOLVING the trolley problem. You seem to think it was. I shared a news article on AI research pertaining to self-driving cars and ethical algorithms. Then people got their panties in a bunch because "it's questioning muh moral relativism" and such. Such an exaggeration, and not even for allegorical purposes. AFAIK, I am tolerating this comment right now. And you cannot stop me.

Two equally rational arguments (cause if they're not of equal worth, there is no question). Well, I guess one of them would have to be picked by the humans who make the device. But since they are of equal worth (which was the point of that example), the choice wouldn't really bother us so much. I mean, we could always opt for multi-track drifting, but it seems like a needless waste of intelligent biomass.

Have I? Not sure about that. You are making sentences. With meaning. To form arguments. You can't do that without using logical principles. Go check out the relation between epistemology and science. Reject the former and the latter goes out with it. As for the second sentence, yeah, that's true, I agree. I did, however, provide a justification for that assumption, which you can find by scrolling up. If it's still unclear, well, I'm sorry. I have spent more time on this thread than I feel I should have.

I'm talking about Ethics as an academic discipline; you keep bringing it back to individual opinions. Opinions about ethics aren't the same as Ethics, the same way that opinions about biology aren't the same as Biology. A discipline is something that multiple people collaborate on. It is by definition not individual; it is concurrently elaborated by many disciples. People who think that evolution isn't a thing sure have a right to be wrong, they just don't have a right to claim that their personal uninformed opinion is worth as much as the informed opinion of a community of experts. Believe it or not, there actually tends to be a bit less disagreement among a community of ethicists than there is among a community of evolutionary biologists. But everyone insists on the fact that ethics in particular is so fundamentally different from other fields of academic research that any person, without even being involved in the field, would necessarily be equally qualified from birth.

Oh, I never said it was easy or simple. I just said it was possible. It is particularly difficult and time-consuming. But as it turns out, it is necessary in many cases, so yeah. Gotta do it.

Well then, look up the two authors I mentioned. Otherwise, you're asking me to dedicate hours of productive time to explaining something in this thread which is unrelated to both my original post and the topic of this thread. I'm all for debating things on the internet, but I currently do not have the hours required by what you ask for.
  7. Because thinking that people can't be convinced is utterly and completely pointless, regardless of whether or not it would be true. Either a) people can be convinced and thus the relevant problems can be solved, or b) they can't and said problems can't be solved. Only the former leaves room for any potential course of action. Given that there exist such problems, it is unwise to presume that people cannot be convinced in such a way as to allow the resolution of said problems (a presumption which is empirically false anyway, as evidenced by every single individual who has ever been convinced of something or changed their mind). Besides, if you really thought that people can't be convinced, why are you even bothering to try to convince me of it? It makes no sense.
  8. Alright, let's be clear here that we're talking about something that has to be context-sensitive. We don't know if there really are multiple best answers (and not merely "good" answers), but for the purposes of this, let's assume from the start that this is the case. Fortunately, multiple answers doesn't necessarily imply multiple answers in the same identical context. Change the context in any way, and the appropriate answer may change (or it may not, but there's a possibility, depending on the difference in context). If you build your system to take into account as many contextual parameters as possible (which a machine can do much better than we do, and will do increasingly better with continued improvement), the chances of getting conflicting answers are lowered. Even if we assumed that they couldn't be eliminated, those chances would still be better than with a human behind the wheel. Also, it is entirely possible to design, create and use practical systems adapted to context, in such a way as to respect the differing values and priorities of different communities. The nice thing with AI-driven cars is that they have an input about where they are, and could in theory switch to a different set of values if/when entering a community that hypothetically has a drastically different set of values than the previous one. (A rough sketch of what I mean follows at the end of this post.)

Except the whole point is to get rid of personal opinions and use our heads instead. We're talking about justification and argumentation. Opinions don't matter. When you can indeed provide a sound justification for a given belief or opinion, the interlocutor ought to at least consider it worthy of reflecting on. It's the justification that matters. Sometimes, people can even recognize the value of an argument, even if they don't agree with it, and perhaps at least tolerate its neighboring existence. Assuming that the "everyone" we're talking about 1) don't have Antisocial Personality Disorder, and 2) fully understand the principles of logic, then there are generally three options: either A) something is rationally ethical for everyone, or B) it can be tolerated as an ethical alternative by everyone, or C) it's not rationally ethical. From my experience living in multicultural communities with people from extremely varied backgrounds, most people generally can come to agree on the things that they can tolerate, and thus on what they can have in common in their respective ethical frameworks. But they almost never initially agree on these things. There has to be a dialogue, an exchange that serves as a way to examine and scrutinize the other's discourse. People can be convinced, or at least convinced to tolerate. And what is tolerated by people of a given community can be tolerated by AI designed to function within that community.

It isn't self-evident. And there aren't always differences of opinion. They are just very common, because different people are in different contexts. But generally there are actually more similarities than differences. For example, very few people would argue that an ethical kill while hunting involves skinning the prey alive, and the justifications provided by those who would argue this are generally bullshit and/or questionable from a mental health perspective. Now, most real cases of disagreement aren't as clear-cut as this exaggerated example, but the takeaway remains roughly the same. Another important point is that relativist assumptions serve no practical purpose and bring us nowhere closer to a position of knowledge.
If any given mutually-exclusive opinions were necessarily equally valid and true, then the opinion that they aren't would also be equally valid and true... Relativism inherently implies the truth of its own falsification, and is therefore untenable. Besides, we'd also have to throw science as a whole out of the window if we were to defend relativism. There are plenty of reasons for not doing that. In reality, what one person considers to be factually true, another may not; that doesn't mean that both are equally correct. What one person considers to be logically valid, another may not; that doesn't mean that both are equally correct either. It is a fact that people's opinions can be (and often are) incorrect. We shouldn't go out of our way to adapt our theories so as to not exclude erroneous beliefs. The point isn't whether or not a conclusion is in line with people's personal opinions, or whether those opinions have intrinsic value. They don't. "Not until they are brought before the tribunal of reason", as an old German would say.

You are right in thinking that the Trolley Problem has not been resolved, especially not in this thread. But I think you are wrong in thinking that this would somehow have been the point of this thread, or even a reasonable expectation for this thread. Either way, arriving at an appropriate solution would mostly be done by convincing people. Which is a lot of work, but possible. By that I mean that it's measured in years. Humans are malleable in the long term, but stubborn in the short term. As for my opinion (and it really is just an opinion so far), it is that, in the meantime, communities should decide what non-ideal outcomes they prefer within the confines of their own community. I dislike that this resorts to majority rule in the case of communities that prefer to resort to this rule, but hopefully we'll work out (a) better alternative(s). Deciding on a policy for this at the community level seems more productive and relevant than making it an individual choice. There are individuals who aren't in a condition to be able to distinguish "better" from "worse", and whose choices would be made with too little concern for others. IMO, clarifying the sets of values that different communities give themselves would also be a good way to start making more sense of this. Not in a way that would encourage people to move to a different community if they have different values, but rather in a way that would encourage dialogue and exchange to prevent the tyranny of the majority.

For this part, I completely agree with you. Hence, community-level acceptance is the first goal to aim for. Universal acceptance, we'll get there if possible/necessary. I don't think it's necessary. You don't think it's possible. I don't even think it's an immediate problem, unlike community-level acceptance, because cars can already have location-restricted programming.

I get the fact that individual humans in general aren't very rational. And that we do have biases. Still, I think you have it backwards: it is rational conditions which transcend subjectivity and bias. It is also a fact that humans are capable of using reason. We are also capable of improving in our use of reason, to use it better and more often. I don't care much if people still disagree when using reason, as long as the disagreement is between rational arguments. A rational argument, even one that we personally don't want to agree with, is still pretty much always preferable to unjustified beliefs. Indeed.
As a humble undergrad student, I don't feel like I'm necessarily right about this, so I'm not confident in calling this more than mere opinion, but here goes. (N.B.: My answer here will probably be very superficial, and filled with holes due to not having a lot of time to dedicate to refining forum posts. So just bear with me unless it stops making sense.) In any case, we first have to define what we mean by "right" and "wrong":
- Being right is having a valid belief that has an optimal justification compared to that of all other available alternative beliefs, and which is not proven false. (Note: "being right" does not equal "knowing". One can be right by accident, and rightness can be refuted if proven otherwise, such as if one comes up with an even better justification. In that case the new belief will be optimal.) (The ever-so-important contextualisation comes from the "all other available alternative beliefs".)
- Being wrong is having a belief that is either proven false, proven invalid, or proven to have a sub-optimal justification compared to available alternatives. I.e., it's a spectrum; there are degrees of being wrong. Most people are wrong (perhaps all of us are), but some more so than others. Obviously, we should strive to be less wrong, as much as we can.

As for actual ethical systems, I personally favor a slightly unusual mix of context-sensitive deontology and utilitarianism, in which a utilitarian justification is what ultimately supports any maxim or axiom, while employing maxims and axioms to a great extent due to their usefulness. I honestly don't think that my personal "ideal" ethical system is sufficiently developed for release, if it ever will be; consider it more of a beta. I also don't think it's relevant to this thread, so I won't bother going into more detail for now. In the meantime, I'm fine with referring people to Peter Singer's utilitarianism, Eliezer Yudkowsky's Bayesian rationalism, etc. I strongly suggest reading Yudkowsky's stuff (particularly the publications for the "LessWrong" community) for anything related to the specific concept of "rationality" that I may have failed to explain clearly enough in this thread.

Well, you bothered applying them so far. I'd say the evidence is racking up against you there. I guess we could say that about anything; the "does not apply" seems purely arbitrary. How another person logically determines that the Earth isn't flat wouldn't apply to me because something makes me refuse to listen to reason? Of course it applies. People can very well not listen to reason, although doing so in such a case means being wrong. And people can very well be wrong regardless of whether or not they realize that they're in the wrong.

Well, yeah, debate is how we use reason as critical thinking to differentiate sound arguments from worse ones. Debate is a good thing. Debate is how any discipline that seeks knowledge or know-how can manage to advance beyond initial disagreements.

I don't understand what you mean by this. In sociology, oppression is a rather clearly-defined concept, and oppression is empirically identifiable when and where it is performed and experienced. That people experience a feeling that they associate with being oppressed doesn't tell us much about whether or not they actually are. Also, it's never 100% of the members of a society who are going to be oppressed. It is a sociological fact that not everyone can be oppressed, just that a lot of people are or have been.
The fact that people can be contradictory does not mean that the non-contradiction principle should be discarded as a logical law. In order to verify or falsify the laws of logic, one must resort to using logic as their weapon, an act which is self-defeating. Unless you meant something else by "the logic"? "Justification", perhaps? In that case the sentence would make sense. But I probably just misunderstood what you meant by that.

Well, capitalism is more of an economic theory, not so much a moral or ethical framework. Most ethicists who defend capitalism are utilitarians, but a large proportion of those who defend anti-capitalist alternatives (of which there are many) are also utilitarians. And a lot of ethicists who defend capitalism base their arguments in socially-focused ethics; capitalism would be socially justified on the basis of its alleged superior benefit for society as a whole compared to that of alternatives (hence why they talk about capitalist societies, with norms and methods of regulation; capital can't even possibly exist without a structured society). The crux is mostly in whether or not the factual premises are true, because the argumentation itself can be valid.

Indeed, of course not. Especially since these preferences and their origins can be logically explained. I did say many times that it's important to identify and account for implicit biases and beliefs which are not held from a rational justification.
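Since this post talks about switching value sets based on location, here is a minimal sketch of what I mean, assuming a hypothetical geofence lookup and made-up policy parameters (none of this comes from any real AV system; every name here is illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EthicalPolicy:
    """Hypothetical per-community parameters weighting an AV's decisions."""
    name: str
    weight_passenger: float    # how strongly to protect occupants
    weight_pedestrian: float   # how strongly to protect people outside the car
    min_brake_margin_m: float  # extra braking margin required by local rules

DEFAULT_POLICY = EthicalPolicy("default", 1.0, 1.0, 2.0)

# Toy registry mapping community IDs to locally chosen value sets.
POLICIES = {
    "montreal": EthicalPolicy("montreal", 1.0, 1.2, 3.0),
    "ruralville": EthicalPolicy("ruralville", 1.1, 1.0, 2.0),
}

def community_for(lat: float, lon: float) -> Optional[str]:
    """Stand-in for a real geofence lookup against map data."""
    if 45.4 <= lat <= 45.7 and -74.0 <= lon <= -73.4:
        return "montreal"
    return None

def active_policy(lat: float, lon: float) -> EthicalPolicy:
    """Switch value sets when the vehicle enters a different community."""
    return POLICIES.get(community_for(lat, lon), DEFAULT_POLICY)

print(active_policy(45.5, -73.6).name)  # -> montreal
print(active_policy(46.8, -71.2).name)  # -> default
```

The point of the sketch is only that the lookup is cheap and the value set is data rather than code, so a community-level policy could be swapped without touching the driving logic itself.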
  9. Hence why context is important. And why I've been putting plurals in parentheses all the time when talking about "most rationally-justified answer(s)". It is entirely possible for different solutions to be most applicable in different contexts. I agree with you on this, even though I have to admit that we don't yet know for certain whether or not ethical conundrums can be resolved to a single solution. The assumption of the negative should be seen as a precaution.

I don't subscribe to the notion that majority rule is good enough. I do think that rational thinking can be employed to resolve ethical questions for specific contexts. We don't know that it can't, and it's frankly the only tool we have to make sense of things and obtain knowledge about anything. It's as good a tool as any to solve problems that indeed need solving. It's good enough to know facts about the world, and it's good enough to direct our methods for understanding the world. I don't think it's too far-fetched to work under the presumption that rationality might be good enough to figure out what should and shouldn't be done by a given ethical agent in a given context.

There actually are rational outcomes, and there have been for at least a couple of millennia. The thing is that most of the ancient ones are not contextually applicable or justifiable anymore, and we rarely have a single most rationally-justified framework, only a bunch of sufficient ones. There have been "most rationally-justified" frameworks in certain historical contexts, but they always worked on premises that turned out to be factually erroneous, and are thus inapplicable now that we know those premises were false. The problem is more often with the factual basis, because the reason and logic part has actually been pretty good for the last 2400 years at least. Ethical theories don't have to be universal, eternal grand unifying theories (though it'd be great if that's possible); they just have to be most appropriate (or equally most appropriate) for the specific context until said context sufficiently changes.

I don't know. I can't predict the future. But apparently you can. Assumptions are fine as a methodological precaution, but they can't be treated as knowledge.

Well, I can certainly disagree with that. It is entirely possible for people to be wrong (regardless of whether it's the majority or the minority). And people can, in fact, make flat-out erroneous or invalid judgements; but usually they aren't completely wrong, and rarely completely invalid, to a varying degree. The point is to strive to be less wrong. The thing with judgements and reasonings is that their justification can be examined and scrutinized. Reasoning, when subjected to scrutiny and examination, allows us to differentiate arguments on the basis of 1) their validity and 2) their factual truth. It is rarely clearly or explicitly the case, but in principle, some are more wrong than others, and some reasonings are less sound than others. It isn't necessarily impossible to differentiate things; it's just arduous. Also, yeah, it's entirely possible for opposed reasonings to be equally valid and equally based on equally-true premises. But that's extremely rare, and also very arduous to differentiate. Finally, this has nothing to do with the popularity of an opinion or position, nor does it have anything to do with majority rule. The principles of Logic don't take sides.
  10. Hence why they are also working with engineers. Multidisciplinary research is an amazing thing. I'm pretty sure everyone agrees on this. It's just that the algorithm for "what to do in the meantime" is going to play an important part until we get there. We may be a long time away from having a completely failure-safe autonomous vehicle system, whereas ethical algorithms are already a thing. They could already save many lives right now, perhaps even your own or your loved ones'. I mean, if we had perfect things, the fields of Ethics and Engineering would be much simpler, and less concerned with immediate problems. I'm positive that the majority of ethicists would be relieved and pleased to find themselves out of a job, or at least to have much less work to do. But that might never happen.
  11. The realm of practicality is technics, not science. Technical fields aren't properly sciences either. It's the difference between biology and medicine, or between physics and engineering, or between sociology and social work. Philosophy is a whole other thing; it's meta-scientific. Epistemology is meta-scientific because it's what directs science. Logic is meta-scientific because it's what directs reasoning. Ethics is meta-scientific because it is a discourse on how things should be improved, not on how they are. They're not scientific because they're not discourses on facts about the world, which is what scientific discourse is. The level of philosophical discourse is that of a discourse on discourses about facts. Merely "not being a science" isn't a problem if the goal isn't to do science's job of obtaining facts about the world. Philosophy uses scientific knowledge; it doesn't produce it, and does not claim to produce it either. This whole thing isn't about obtaining facts about what self-driving cars are.

Ethics is precisely a discipline of rational justification. Opinions, knee-jerk reactions, sophistry and implicit biases may be major factors in popular opinion, but they aren't significant factors in ethics research, and should not be. Arguments are. A major point of ethical research, as an academic discipline, is to detect implicit bias, take it into account, and either eliminate it or work around it. The whole academic process is centered on the identification of biases and fallacies. This isn't just some newspaper's editorial section.

Also, you seem to be under the presumption that it is somehow a priori impossible to arrive at a most rationally justified answer (most rationally justified until proven otherwise, of course). These things first have to be constructed if they're ever going to be "found"; theories don't exist as facts in nature, they have to be made, and then presented to people in a convincing argument. My wording may have been misleading; a more correct interpretation would be "to arrive at (a) most rationally justified answer(s)". (Excuse me, English isn't my mother tongue.) The takeaway is that, if you find one or more most rationally justified positions on a topic, people cannot rationally disagree in favor of sub-optimal alternatives. As far as the sake of research goes, it doesn't matter if people disagree for whatever other reason; the alternatives are still going to be rationally worse options. And we obviously should not make decisions based on rationally worse options if we can possibly help it. The whole point of Ethics is to optimize choices and decision-making. So far, no convincing proof or argument exists that would rule out the possibility of optimization, so the potential benefits for humanity are at least worth looking into. Really, I would rather people talk about the actual article and the research mentioned rather than talking out of their arse about the very value and goals of an academic discipline of which they may not have a working knowledge.
  12. Now, we need to differentiate "action which can be reasonably justified as more likely/guaranteed to improve the state of things in some way" from "action which is more popular". They aren't always the same, even though they sometimes are. In most countries, AV heuristics will at least be given guidelines via legislation. Research such as this is funded by entities like the National Science Foundation to figure out what the best practices in AI heuristics would be; i.e., the goal is to help solve potential or existing problems. Automobile manufacturers are the ones who specifically care about what sort of products individual consumers want. But "what consumers want to buy" may have to be restricted by "what can justifiably be available". I mean, there are reasons why individual consumers aren't allowed to buy artillery ordnance for home defense, and still would not be allowed to even if that's what all consumers wanted. This is an exaggeration, mind you. But the point is that somewhere along the line a prescriptive reasoning is made, and that's "doing ethics", regardless of whether the title of the person(s) doing that reasoning is "philosopher", "ethicist", "judge", "senator", "citizen" or whatever else. I mentioned the relevance of a degree in Philosophy in this context because it is specifically meant to train people in the methods of justification and rational discourse, with regard to discourse on facts about the world (scientific), and discourse on the way we discourse about facts about the world (meta-scientific).
  13. Having people agree on things isn't the immediate point of this. People's differing opinions are also partly caused by many things other than rational justification. Here the point is finding the most rationally justified answer(s). The part about convincing people to change their minds is a whole different thing.
     - We have no reason to believe that finding a "most justified answer" would somehow be impossible. The principles of rationality aren't exactly subjective.
     - We only have reason to doubt that people would initially unanimously accept it, if there were to be such an answer. That is no reason to dismiss the very possibility of it being widely accepted.
     Also, this is not at all "asking again". It's not even "asking people".
  14. I did read the guidelines before posting. I don't know what to do, then. The post a) starts with my own input, b) links to a relatively reputable source, c) quotes important bits but not the whole article, and d) makes use of quotation marks for all non-original text. Is it too much of a wall of text? Should I use a full-on quote box instead of just the quotation marks?

This. @Sauron I'll attempt an answer to this as such: it is a matter of levels of discourse. (Tl;dr version at the end.)

Academics in the field of Philosophy are specialised in the argumentation and justification of theses. A philosophical discourse is a discourse on a discourse, such as "how we think about things", and what differentiates sound reasoning from unsound reasoning. "Philosophers" (and by that I generally mean academic experts, not most self-titled writers) are effectively more qualified to deal with matters pertaining to this specific field. Even though pretty much all humans have to do a bit of this in order to function, and are as such at least a bit familiar with the principles of reasoning, professors of Philosophy are the people whose job it is to be experts about it. Like how a biologist's job is to be an expert in biology (or rather, in a specific branch of biology). Similarly, professors of Philosophy are specialised in specific branches of the discipline.

Explaining it further requires explaining the difference between contemporary Philosophy and the contemporary Sciences (but I won't touch on why we stopped calling the Sciences "natural philosophy" and stopped calling Philosophy "science" singular). Philosophy exists in academia as a meta-scientific field. There are a few other meta-scientific fields, which all cover specialized types of inquiry. Academics in the field of Philosophy are experts in branches of that field, which include (not exhaustively) the branches of Logic, Epistemology (the study of knowledge and how to obtain it), Ontology (how to differentiate things that are from things that aren't) and Ethics. As such, Philosophy is a specialised meta-scientific field which is tasked with the stuff that directs and oversees the development of Science (among other things). While many professors of philosophy are philologists (interpretative historians whose work pertains to the History of ideas), many also do active research pertaining to contemporary subjects, such as "how can we know that our method for figuring out «whether the Large Hadron Collider poses risks or not» will be reliable". As such, it is thanks to (and because of) epistemologists that we can be fairly certain that "we are correct in believing that the Large Hadron Collider is safe to use within the parameters that have been set" is true.

From my (imperfect) point of view (as an undergrad student), contemporary ethics works a lot like epistemology. It is also about "how can we know what we want to know", and "how can we know when/if we know what we want to know"; in which case the "knowing" is about "what choice(s) would best be made in X context". (Deontologists basically just take X as a constant more often than not, so I'll speak from the context-sensitive angle since it covers everything.) If your field has terms like "best practice" or "deontological code", that's the product of prescriptive reasoning, not unlike the products of research in Ethics (and it sometimes directly IS the product of the work of "philosophers").
Tl;dr: Ethicists are academic experts in providing (usually context-sensitive) frameworks for prescriptive reasoning. This makes their work both relevant and important for many, many things related to a) scientific research, b) technological development, c) legitimate uses of power, d) solving problems, etc.

As for the half-million dollars in funding: "Evans has won a three-year, $556,650 National Science Foundation grant to construct ethical answers to questions about autonomous vehicles (AVs), translate them into decision-making algorithms for AVs and then test the public health effects of those algorithms under different risk scenarios using computer modeling. He will be working with two fellow UML faculty members: Heidi Furey, a lecturer in the Philosophy Department, and Asst. Prof. of Civil Engineering Yuanchang Xie, who specializes in transportation engineering. The research team also includes Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University, and experts in public health modeling at Gryphon Scientific."

Source: https://www.uml.edu/News/stories/2017/SelfDrivingCars.aspx
  15. On the topic of "jobs and fields that won't be replaced by automation", a philosophy degree is apparently still going to be relevant for a while. https://qz.com/1204395/self-driving-cars-trolley-problem-philosophers-are-building-ethical-algorithms-to-solve-the-problem/

«Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels. The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?»

«Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at Mass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.»

«He hopes the results from his algorithms will allow others to make an informed decision, whether that’s by car consumers or manufacturers. Evans isn’t currently collaborating with any of the companies working to create autonomous cars, but hopes to do so once he has results. Perhaps Evans’s algorithms will show that one moral theory will lead to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” says Evans. It’s possible that two scenarios will save equal numbers of lives, but not of the same people.»

«One of the hallmarks of a good experiment in medicine, but also in science more generally, is that participants are able to make informed decisions about whether or not they want to be part of that experiment,” he said. “Hopefully, some of our research provides that information that allows people to make informed decisions when they deal with their politicians.»
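The article doesn't publish Evans's actual algorithms, but here is a toy sketch of the general approach it describes: encode a trolley-style scenario's possible outcomes, then let interchangeable ethical theories rank them. The scoring rules below are illustrative assumptions of mine, not the research team's models:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action in a trolley-style scenario and its projected result."""
    action: str
    deaths: int
    actively_caused: bool  # does the vehicle actively redirect harm?

def utilitarian_score(o: Outcome) -> float:
    # Pure consequence-counting: only the number of deaths matters.
    return -o.deaths

def deontological_score(o: Outcome) -> float:
    # Weigh actively caused harm much more heavily than allowed harm.
    return -o.deaths * (10.0 if o.actively_caused else 1.0)

def choose(outcomes: list[Outcome], score) -> Outcome:
    """Pick the highest-scoring outcome under a given theory."""
    return max(outcomes, key=score)

trolley = [
    Outcome("stay on course", deaths=5, actively_caused=False),
    Outcome("swerve", deaths=1, actively_caused=True),
]

print(choose(trolley, utilitarian_score).action)    # -> swerve
print(choose(trolley, deontological_score).action)  # -> stay on course
```

Running the same scenario through different theories makes the trade-offs explicit, which is roughly what "showing how an autonomous car would respond according to the ethical theory it follows" amounts to.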