IT'S TIME TO GIVE THE AI DOCS A GO! WHO NEEDS THE CORRUPT W.H.O.?
AI DOCTORS AND STAFF TO THE RESCUE.
With the coronavirus pandemic in March 2020, the Trump administration requested changes to the judicial system in anticipation of delays or even a shutdown.
On April 14, 2020, President Trump announced the defunding of a corrupt and politically biased WHO.
The WHO had asked for an additional $675M on top of the roughly $560M it already received from the USA in a single year. Paying over $1B to an organization that is obviously corrupt, inefficient, and politically biased is neither smart nor proper resource allocation for the planet.
This is a prime example of the failings of planetary leadership by humans alone.
A world health organization centered around artificial intelligence and deep learning is the only permanent answer, and ultimately would provide the best results at the lowest cost.
We are talking about the possibility of saving millions if not billions of lives.
In my Deep State series, I recommend streamlining and digitizing our medical system so that artificial intelligence such as machine learning (ML) can eventually produce automated research and smart medical treatment on large scales.
Such a remedy could also apply to doctor visits and reporting, handle most of the preliminary drudge work in all of the medical system, and eventually render medical care in a fair and impartial manner on massive and cost-efficient scales.
Every doctor on the planet would benefit from an AI Assistant or Partner.
In pandemics, a system of automated and remote care could greatly reduce rates of infection.
The system could be set up to maintain high employment levels of human medical personnel, promoting them to tasks where only humans excel. A warm, friendly, and caring human nurse or doctor, so important for the healing process, will be hard to replace with AI.
But as far as reviewing previous records, current research, drug interactions, testing suggestions and protocols, diagnosis and treatment plans, and possible outcomes go, non-augmented humans cannot compete with AI.
Nor can humans possibly track a pandemic as well.
AI docs are the only way to provide basic but state-of-the-art healthcare to billions of the needy in a cost-efficient manner.
In my Deep State series, I also recommend streamlining and digitizing our judicial system so that artificial intelligence such as machine learning (ML) can eventually produce an automated and smart legal system. Such a remedy could also apply to police work and reporting, handle most of the preliminary drudge work in all of the judicial system, and eventually render justice in a fair and impartial manner on massive and cost-efficient scales.
Artificial intelligence can help make judges' rulings less biased. AI can predict which judges are likely to rule with bias and prompt them to reconsider, reducing or even eliminating that bias.
Researcher Daniel Chen, a lawyer with a Ph.D. in economics, proposes using AI to correct the biased decisions of human judges.
Years of collected court and judge decision data revealed judicial behavioral bias in many cases. AI combined with large datasets could predict a judge's biased decisions, alert the judge and others for oversight, and push the judge toward a fairer sentence.
A common sentencing error arises from what is known as the gambler's fallacy. If a judge grants asylum too many times in a row, he or she might worry about having become too lenient and, to auto-correct, deny asylum in the next case regardless of its merits and facts. The rulings on the previous cases wrongly affect the ruling on the current case as an extraneous influence.
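The sequence effect described above is measurable. Below is a minimal sketch, using synthetic data, of how one could test a judge's rulings for gambler's-fallacy behavior: a judge free of sequence effects should grant at roughly the same rate regardless of the previous decision, so a large gap between the two rates is a red flag.

```python
# Sketch: detecting gambler's-fallacy-style sequencing in rulings.
# All data here is synthetic, purely for illustration.

def sequence_effect(rulings):
    """rulings: list of 1 (granted) / 0 (denied) in chronological order.
    Returns (grant rate after a grant, grant rate after a denial)."""
    after_grant = [cur for prev, cur in zip(rulings, rulings[1:]) if prev == 1]
    after_denial = [cur for prev, cur in zip(rulings, rulings[1:]) if prev == 0]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(after_grant), rate(after_denial)

# A hypothetical judge who "auto-corrects" after every grant:
alternating = [1, 0] * 20
print(sequence_effect(alternating))  # (0.0, 1.0): previous ruling fully drives the next
```

A real analysis would also control for case merits, as Chen's work does, but even this crude two-rate comparison exposes the pattern in the toy data.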
Circuit courts show presidential-election-cycle bias: in an election period there is more disagreement, and judges split along partisan lines.
Applying machine learning to test the early predictability of asylum cases revealed that a judge's ruling could be ascertained even before the case opened, based only on the nationality of the asylum seeker and the judge's identity. The facts of the case were irrelevant. Chen says the judges may be favoring snap judgments and heuristics over facts.
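The logic of that finding can be sketched in a few lines. Below is a toy predictor (synthetic data, invented judge and country labels) that sees only the judge's identity and the applicant's nationality, never the case facts; if such a model predicts rulings well, the facts cannot be what is driving the outcomes.

```python
# Sketch: predicting rulings from extraneous features alone.
from collections import defaultdict

def train(cases):
    """cases: list of (judge, nationality, ruling 0/1). Learn the majority
    ruling for each (judge, nationality) pair."""
    counts = defaultdict(lambda: [0, 0])  # pair -> [denial count, grant count]
    for judge, nat, ruling in cases:
        counts[(judge, nat)][ruling] += 1
    return {pair: (1 if c[1] >= c[0] else 0) for pair, c in counts.items()}

def accuracy(model, cases):
    hits = sum(model.get((j, n), 0) == r for j, n, r in cases)
    return hits / len(cases)

# Hypothetical court where Judge A denies country-X applicants regardless of merit.
cases = [("A", "X", 0)] * 8 + [("A", "Y", 1)] * 6 + [("B", "X", 1)] * 6
model = train(cases)
print(accuracy(model, cases))  # 1.0: rulings fully predictable before the case opens
```

Chen's actual models are far more sophisticated, but the test is the same in spirit: high accuracy from identity features alone is evidence of bias.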
Opinion: judgment tied to nationality, formed before the facts are even presented, sounds like national or racial bias.
Chen suggests that judges be notified when this bias might occur and asked to spend more time deciding in order to reduce it.
Opinion: if the judge is biased or racist, I don’t see how just telling them to take more time would help unless there was a threat to put them under review. “I hate people from Japan because I am Korean” is not going to change with a few hours thinking about it.
Chen discussed some upsetting facts. Louisiana football losses affect judge sentencing and so do defendants' birthdays.
Opinion: it is obvious that proper justice and sentencing are not being served in many cases based on bias, a basic human flaw that stems from our many brain-bugs and the fact that emotion often trumps reason. The entire judicial system needs reform.
Chen suggests possible training programs for judges. Just letting them know their biases helps, Chen claims. Applications and research regarding decision-making algorithms are on the rise.
Opinion: a hybrid system will evolve between AI and machine learning and their human counterparts in the judicial system, making vast improvements both in proper justice and in the capacity to handle many more cases efficiently. A corrupt judicial system came into focus in Chicago with the Jussie Smollett case. (see book below on the entire Smollett case)
The entire system should be reformed; beginning with police reports, all data should be recorded in a fashion that facilitates uploading and the evaluation of every case.
The Berkman Klein Center at Harvard University engages with government officials to “facilitate learning and idea-sharing around emerging AI issues.”
Our judicial system is broken, and the reasons include corruption, racial bias, and too many antiquated or illogical laws. Humans are flawed and most cannot be impartial, so why not reduce their participation in the judicial process? By replacing human-biased decisions with objective AI algorithms, we can snuff out judicial bias. Machine learning and humans can form a collaborative team of technical objectivity and human experience: AI Hybrid Justice and, eventually, Law.
One study showed that applying machine learning in courtrooms reduced jail populations significantly.
With efficient and just AI sentencing, we could bring courtroom mystery out of the closet and show the public how the legal reasoning came to be and exactly why the judge selected a particular sentence, free of influences such as whether it was the defendant's birthday, whether a presidential election is coming up, or whether a favorite sports team won or lost on Sunday.
AI could restore trust in judicial systems worldwide by providing oversight, process monitoring, displaying the decision-making process, demonstrating control over accountability, and pointing out to judges when they are acting in a biased manner.
With proper development and funding, natural language processing (NLP) and machine learning (ML) could construct informative legally fluent bots. These continuously-updated legal bots, familiar with all laws and regulations, even new ones, could not only consult with citizens but with lawyers as well.
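As a rough illustration of how such a legal bot might work at its simplest, the sketch below answers a question by retrieving the statute whose wording best overlaps the query. The statutes are invented placeholders; a production system would use NLP embeddings and a full, continuously updated legal corpus rather than keyword overlap.

```python
# Sketch of a retrieval-style "legal bot" over a tiny invented corpus.

STATUTES = {
    "Sec. 101": "a tenant must receive thirty days written notice before eviction",
    "Sec. 202": "small claims court hears contract disputes under eight thousand dollars",
    "Sec. 303": "employers must pay overtime for hours worked beyond forty per week",
}

def answer(query):
    """Return the (section, text) whose wording shares the most words with the query."""
    q = set(query.lower().split())
    best = max(STATUTES, key=lambda s: len(q & set(STATUTES[s].split())))
    return best, STATUTES[best]

print(answer("how much notice before an eviction"))  # matches Sec. 101
```

Because the corpus is just a dictionary, keeping the bot current with new laws reduces to updating that data, which is exactly the continuous-update property described above.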
According to the New York Times, the states of Utah and Virginia have applied behavioral sentencing algorithms; Virginia has done so for over a decade.
Current algorithms can show bias based on how they were trained or on programmers' tweaks, and it is hard to show exactly how they reached a decision.
Opinion: this can change by testing for bias, retraining, and building systems that show their reasoning.
According to Science Magazine, AI judges have already beaten humans at predicting Supreme Court decisions. One study scored the AI at 83% accuracy.
Such AI models could be used to reverse-engineer Congressional bills, for example. The bill's authors could change its parameters, run it through the model to see whether it would pass or fail, and repeat the process until they had what they needed.
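That change-and-rerun loop can be sketched directly. The predictor below is a toy stand-in (a real version would be an ML model trained on past Supreme Court decisions), and the funding threshold is invented purely for illustration.

```python
# Sketch of the reverse-engineering loop: tweak a bill parameter until
# the prediction model says the bill would survive.

def predicted_to_pass(bill):
    # Toy stand-in predictor: bills spending over 500 ($M) are struck down.
    return bill["funding"] <= 500

def tune_bill(bill, step=50):
    """Lower the funding parameter until the model predicts the bill passes."""
    while not predicted_to_pass(bill) and bill["funding"] > 0:
        bill["funding"] -= step
    return bill

bill = {"name": "Example Act", "funding": 700}
print(tune_bill(bill))  # funding lowered to 500, the first value the model accepts
```

With a real predictor, the same loop could search over many parameters at once, which is the "repeat until they had what they needed" process described above.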
A properly designed Digital Judicial System would revolutionize the entire court system.
Just as IBM's Watson can analyze thousands of new cancer-research papers each day, assimilate the information, and apply it to specific cancer cases to produce a diagnosis, prognosis, and treatment plan for a patient, legal AI can do the same for court cases.
Legal AI can study hundreds of thousands of court cases, if not millions, and take on thousands more and it won’t forget one word as it applies the law.
Data standards would facilitate entry into machine learning programs and case analysis.
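One concrete form such a data standard could take is a single machine-readable record schema that every court fills in the same way, so cases flow straight into ML pipelines. The field names below are illustrative, not an existing standard.

```python
# Sketch of a standardized, machine-readable case record.
import json
from dataclasses import dataclass, asdict

@dataclass
class CaseRecord:
    case_id: str
    court: str
    charge: str
    judge_id: str
    outcome: str        # e.g. "granted", "denied", "dismissed"
    sentence_months: int

record = CaseRecord("2020-0001", "Circuit 5", "asylum", "J-17", "granted", 0)
print(json.dumps(asdict(record)))  # one uniform JSON line per case, ready for ingestion
```

Uniform records like this are what would let the police-report-to-evaluation pipeline described earlier run without per-jurisdiction data wrangling.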
The Jussie Smollett case in Chicago was administered using “alternative justice,” implemented because of the huge backlog of cases. A public uproar ensued: three weeks of detective and police work, and a grand jury indictment on 16 felony counts of disorderly conduct, were all for nothing because the prosecutor cut a deal with Smollett's lawyers to drop all charges in exchange for the $10,000 forfeited bail and 16 hours of community service, and then sealed the case.
Alternative justice puts all of the power in the hands of prosecutors, who are paid additional fees if they can persuade the defendant to accept it. Charges can be reduced or dropped altogether at the prosecutor's whim or political agenda, thus bypassing juries.
This is not justice. It is a sham.
A digital justice system run by machine learning and inference could also bypass juries, but it would administer justice with far less human error and bias. It could reduce the presence of corrupt prosecutors in the system, and the need for prosecutors, judges, and juries in general, saving billions if not trillions of dollars annually.
On March 19, 2019, Wired.com ran a story on Estonia, a world leader in digital government, and its ambitious “robot judge” project. The AI judge would adjudicate small-claims disputes of less than $8,000, clearing the current backlog of cases from clerks and judges.
Recently launched, the AI project will start with a pilot program centered on contract disputes. The parties will upload documents and anything else relevant, and the AI judge will issue a decision that can be appealed to a human judge. The system will undergo adjustments based on feedback from lawyers and judges.
Other countries have used AI for sentencing but not so much for judicial decisions.
A coordinated AI effort across the federal government in the USA has gone slowly because each agency's databases are different and not easily shared.
Opinion: So make the databases shareable.
Other reasons for resistance or roadblocks to Digital Justice and Administration in the USA: there is no national ID system, Americans fear “Big Government,” and there may be Constitution-based challenges to fully automated decision-making by a government agency.
Opinion: implement a national ID system; it is needed for voting anyway. We have passports and state driver's licenses, so what's the big difference? I don't think Americans would fear Digital Government as any bigger than the “Big Government” that already violates many of their privacy rights and freedoms. If an automated judicial system, even one with appealable decisions, were ruled unconstitutional, we would need a Constitutional amendment. Don't forget we can reverse-engineer appropriate bills and amendments using AI to predict Supreme Court pass-or-fail decisions.