I’m still catching up after a busy week of activities. The following boost made me wonder what companies like Microsoft, and possibly others, are doing.
Imagine taking a picture and asking whether the food in it, as an example, is spoiled. The large language model looks at the picture, tells you the food is OK, and you get sick because it was just that bad.
Here is the boost.
jazzyjennifer: Boosting Aaron (hosford42): I am really, really, REALLY irritated by what I just saw. The #ImageDescription function of Microsoft’s #Bing is outright lying to people with vision impairments about what appears in images it receives. It’s bad enough when an #LLM is allowed to tell lies that a person can easily check for veracity themselves. But how the hell are you going to offer this so-called service to someone who can’t check the claims being made and NEEDS those claims to be correct?
How long till someone gets poisoned because Bing lied and told someone it was food that hasn’t expired when it has, or that it’s safe to drink when it’s cleaning solution, or God knows what? This is downright irresponsible and dangerous. #Microsoft either needs to put VERY CLEAR disclaimers on their service, or just take it down until it can actually be trusted.
#Blindness
#VisualImpairment
#Accessibility
#AccessibilityMatters
#Disability
#DisabilityRights
#CorporateResponsibility
#LargeLanguageModels
#MoralHazard
This is something we should see if we can take Microsoft and others to task over. These descriptions should be as accurate as possible. I’m not saying they will ever be 100 percent correct, but this bad? This is just going to get interesting. Now, let’s see how we can get these companies to pay attention to the important issue I just saw here.