
SHOOTER RED FLAGS AND BIG BROTHER BLUE TO THE RESCUE

Updated: Aug 8, 2019


Big Brother is here and getting bigger whether we like it or not. So at least let’s train him to behave, respect our privacy, and help all of us.


Dear Walmart: How about closing and locking doors BEFORE a shooter comes in?

>DO WE NEED A UNIVERSAL BIG BROTHER?


I was watching a crucial NBA playoff game that was tied, with a timeout called at six seconds left. When play resumed, a foul was called on one player as time ran out. Film analysis over the next couple of days showed that six fouls had been committed in that short span, three by each team. A couple were subtle, but still fouls. It was impossible for three referees watching ten players spread over roughly 5,000 square feet of playing space to catch the first foul committed and blow a whistle.

Another problem is the potential for referee bias: even if a foul is seen, it may not be called. Basketball referees have served prison time for throwing games.


My point is that we humans not only have limited powers of perception, but we can be biased or corrupted as well.


A fourth, AI basketball referee would have detected the first foul and blown the whistle, and the game would have proceeded with confidence that the outcome was fair and the best team won.


After 9/11, I wrote that the only way to stop large-scale terror attacks would be some form of Big Brother surveillance network. The government took advantage of 9/11 to build one through the Patriot Act and the NSA's surveillance programs.


We can argue all day about privacy versus safety, but the government has already usurped our privacy because, as with Spygate and undeserved FISA warrants, it can skirt the law knowing it will likely get away with it.


There are ways, like using metadata, for digital tech to protect privacy: keep the innocent, law-abiding citizen masked while still exposing and catching the criminal. We have already given up our right to privacy in many public and private arenas. If we want society to catch the person who is going to rob the convenience store at gunpoint in an hour, we have to give up our identity when we buy a pack of gum. We already have laws to protect us from the store employee or owner releasing our identity to the public just for the heck of it.
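
As a minimal sketch of how that masking might work (the escrow key, names, and records below are hypothetical, not any real system): a keyed hash turns a buyer's identity into a stable pseudonym that investigators can track, while confirming a real name requires the key, which stays with a court.

```python
import hashlib
import hmac

# Hypothetical escrow key; the idea is that a court, not the store or the
# surveillance network, holds the only means of unmasking anyone.
COURT_ESCROW_KEY = b"held-by-a-judge-not-by-the-store"

def pseudonym(identity: str) -> str:
    """Same person -> same token, but the token alone reveals nothing.
    With a court order, investigators can confirm a named suspect by
    recomputing the token and comparing it to the log."""
    return hmac.new(COURT_ESCROW_KEY, identity.encode(),
                    hashlib.sha256).hexdigest()[:16]

# The store's log of gum purchases records only the token, not the name.
purchase_log = [{"item": "gum", "buyer": pseudonym("jane.q.citizen")}]
print(purchase_log)
```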

To be safe, we will have to give up our street identity to the government if we want Robocops to come and save us from crime and anarchy.

Digital government calls for the codification of all law into digital forms amenable to proper interpretation through deep learning and inference. When you combine evolving surveillance techniques with a digital “police force” backed by a fully digitized library of laws, crimes could be spotted and evaluated as they start to occur, and a request for human assistance issued.
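
A toy sketch of what "digitized law" could look like at the lowest level (the statute codes and element names below are invented for illustration): each offense is stored with the factual elements that must all hold, so a detected event can be matched against the library and routed to a human.

```python
# A hypothetical fragment of digitized law: each statute lists the
# factual elements that must all be present for the offense to apply.
STATUTES = [
    {"code": "PC-211", "name": "robbery",
     "elements": {"taking_property", "force_or_fear", "from_person"}},
    {"code": "PC-459", "name": "burglary",
     "elements": {"entry", "intent_to_steal"}},
]

def classify(observed_facts: set) -> list:
    """Return every statute whose elements are all present in the observation."""
    return [s for s in STATUTES if s["elements"] <= observed_facts]

event = {"entry", "intent_to_steal", "after_hours"}
for hit in classify(event):
    print(f"possible {hit['name']} ({hit['code']}): requesting human review")
```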

At some point, like in Minority Report, the “Benevolent Big Brother” system could predict where a crime was likely to occur and take action to prevent it. A starting point could be simply letting all possible perpetrators know they are being monitored. The same strategy might stop a crime in progress as well as provide evidence. Banning facial coverings would help.

I was in a situation once where I was likely to be beaten up by two angry young men because of the color of my skin. As they circled me and started to move in, I pointed up to a nearby security camera and said:


“This is being filmed, you know!”


They looked up at the camera, stopped, and walked away, giving me a few more dirty looks. I did not know whether the camera was pointed in our direction, and I did not stay to find out. I got into my car and left.


>El Paso Walmart shootings and Red Flag legislation


Sometimes tragedy catalyzes a societal solution.


A lone gunman killed 20 people, starting in the parking lot of a Walmart before entering the store to continue shooting. This mass shooting was followed within 13 hours by another in Dayton, Ohio.

Some form of Red Flag law was proposed in Congress under which individuals likely to commit such crimes would be identified beforehand and prevented from possessing or buying firearms.

A Digitized Judicial and Law Enforcement Department, as part of local, state, and federal government, would help prevent crime, identify crime in progress, and prosecute crimes already committed.


Only with advanced surveillance technology can we effectively combat crime. An armed shooter exiting a car in a public space would trigger immediate surveillance amplification and a call to action for law enforcement. As police and local establishments are contacted, a swarm of camera drones circling the shooter would not only provide evidence but might help stop the assailant, just as pointing out the security camera stopped mine. A swarm of video drones would also warn others of danger. With Digitized Law in the Cloud, a possible crime could be identified immediately, classified, and prioritized for a response coordinated with available resources.
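
One way the classify-and-prioritize step might be wired up, as a minimal sketch (the severity scores and incident names are invented for illustration): a priority queue pops the most severe classified incident first and tasks the responses described above.

```python
import heapq

# Hypothetical severity scores a "Digitized Law in the Cloud" service
# might assign to classified incidents (lower number = more urgent).
SEVERITY = {"armed_person": 1, "robbery_in_progress": 2, "vandalism": 5}

def dispatch(incidents):
    """Pop incidents in priority order and task the available resources."""
    queue = [(SEVERITY[kind], kind, where) for kind, where in incidents]
    heapq.heapify(queue)
    while queue:
        severity, kind, where = heapq.heappop(queue)
        print(f"[P{severity}] {kind} at {where}: alert police, task camera drones")

dispatch([("vandalism", "5th St"), ("armed_person", "Walmart parking lot")])
```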


AI JUSTICE


The Verge ran an article in March 2019 on how artificial intelligence can help make judges less biased. AI can predict which judges are likely to be biased and give them the opportunity to reconsider and reduce, or even drop, their bias. Researcher Daniel Chen, a lawyer with a Ph.D. in economics, proposes using AI to correct the biased decisions of human judges.

Years of collected court data on judges' decisions reveal behavioral bias in many cases. AI combined with large data sets could predict a judge's biased decisions, alert the judge and others for oversight, and push them toward a fairer sentence.


A common sentencing error arises from what is known as the gambler's fallacy. If a judge grants asylum many times in a row, he or she might worry about having become too lenient and, to self-correct, deny asylum in the next case regardless of its merits and facts. The rulings on previous cases wrongly affect the ruling on the current case as an extraneous influence.
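
This kind of sequential dependence is easy to test for in decision data. A toy sketch (the decision history below is fabricated): compare the grant rate immediately after a grant with the rate immediately after a denial.

```python
# Toy decision history (1 = asylum granted, 0 = denied); illustrative only.
decisions = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1]

def grant_rate_after(history, prev):
    """Grant rate among cases that immediately follow a given outcome."""
    followers = [b for a, b in zip(history, history[1:]) if a == prev]
    return sum(followers) / len(followers)

print(f"overall grant rate:        {sum(decisions) / len(decisions):.2f}")
print(f"grant rate after a grant:  {grant_rate_after(decisions, 1):.2f}")
print(f"grant rate after a denial: {grant_rate_after(decisions, 0):.2f}")
# A rate after grants well below the rate after denials is the
# gambler's-fallacy signature: the previous ruling, not the facts,
# is steering the next one.
```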


Circuit courts show presidential-election-cycle bias: in an election period there is more disagreement, and judges split along partisan lines.


Applying machine learning to asylum cases revealed that a judge's ruling could be predicted before the case even opened, based on the nationality of the asylum seeker and the identity of the judge. The facts of the case were irrelevant. Chen says the judges may be favoring snap judgments and heuristics over facts.
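
A sketch of the kind of test Chen describes (the records below are fabricated toy data, and scikit-learn is assumed to be available): if a model fed only the judge's identity and the applicant's nationality predicts rulings well, the facts of the cases cannot be doing much work.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Fabricated toy records: (judge_id, applicant_nationality) -> granted?
X = [["judge_a", "X"], ["judge_a", "Y"], ["judge_a", "X"], ["judge_a", "Y"],
     ["judge_b", "X"], ["judge_b", "Y"], ["judge_b", "X"], ["judge_b", "Y"]]
y = [1, 0, 1, 0, 0, 0, 1, 0]  # judge_a grants X and denies Y; judge_b mostly denies

model = make_pipeline(OneHotEncoder(), LogisticRegression())
model.fit(X, y)

# No case facts are in the features. High accuracy here means identity
# and nationality alone largely determine the ruling: the bias signature.
print("accuracy without case facts:", model.score(X, y))
```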

Opinion: judgment tied to nationality before the facts are even presented sounds like national or racial bias.


Chen suggests that judges be notified when this bias might occur and asked to spend more time deciding in order to reduce it.


Opinion: if the judge is biased or racist, I really don’t see how just telling them to take more time would help unless there was a threat to put them under review. “I hate people from Japan because I am Korean” is not going to change with a few hours of thinking about it.

Chen discussed some upsetting findings: Louisiana football losses affect judges' sentencing, and so do defendants' birthdays.


Opinion: it is obvious that proper justice and sentencing are not being served in many cases because of bias, a basic human flaw that stems from our many brain-bugs and the fact that emotion often trumps reason. The entire judicial system is in need of reform.


Chen suggests possible training programs for judges. Just making judges aware of their biases helps, Chen claims. Applications of, and research into, decision-making algorithms are on the rise.


Opinion: a hybrid system will evolve between AI and machine learning and their human counterparts in the judicial system, making vast improvements both in proper justice and in the capacity to handle many more cases efficiently. A corrupt judicial system came into focus in Chicago with the Jussie Smollett case (see the book below on the entire Smollett case).


The entire system should be reformed, beginning with police reports: all data should be recorded in a format that facilitates uploading and the evaluation of every case.


The Berkman Klein Center at Harvard University engages with government officials to “facilitate learning and idea-sharing around emerging AI issues.”


Our work on algorithms and justice (a) explores ways in which government institutions incorporate artificial intelligence, algorithms, and machine learning technologies into their decision-making; and (b) in collaboration with the Global Governance track, examines ways in which development and deployment of these technologies by both public and private actors impacts the rights of individuals and efforts to achieve social justice. Our aim is to help companies that create such tools, state actors that procure and deploy them, and citizens they impact to understand how those tools work. We seek to ensure that algorithmic applications are developed and used with an eye toward improving fairness and efficacy without sacrificing values of accountability and transparency.


MEDIUM.COM ran an article on April 13, 2018, entitled “AI is entering the judicial system. Do we want it there?”


The article points out that most agree our judicial system is broken, for reasons that include corruption, racial bias, and too many antiquated and downright weird laws, like the Indiana rule that your black cat must wear a bell on Friday the 13th. The author points out that humans are flawed, so why not reduce their participation in the judicial process? By adding objective AI algorithms, we can reduce judicial bias. Machine learning and humans would form a collaborative team of technical objectivity and human experience.


The author claims that one study showed that applying machine learning in courtrooms reduced jail populations significantly.


With efficient and just AI sentencing, we could bring courtroom mystery out of the closet and show the public how the legal reasoning came about: exactly why the judge selected a particular sentence and was not influenced by the defendant’s birthday, an approaching presidential election, or whether a favorite sports team won or lost on Sunday.


AI could restore trust in judicial systems worldwide by providing oversight, process monitoring, displaying the decision-making process, demonstrating control over accountability, and pointing out to judges when they are acting in a biased manner.

With proper development and funding, natural language processing (NLP) and machine learning (ML) could produce informative, legally fluent bots. Continuously updated and familiar with all laws and regulations, even new ones, these legal bots could consult not only with citizens but with lawyers as well.
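
As a minimal sketch of the retrieval step such a bot needs (the statute snippets below are invented, and scikit-learn is assumed): index the law as text, then answer a question with the closest-matching provision.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical statute snippets; a real bot would index the full legal code
# and re-index whenever a new law is passed.
statutes = [
    "PC-459: Entry into a structure with intent to commit theft is burglary.",
    "VC-22350: No person shall drive at a speed greater than is reasonable.",
    "PC-211: Taking property from a person by force or fear is robbery.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(statutes)

def ask(question: str) -> str:
    """Return the statute snippet most similar to the question."""
    q = vectorizer.transform([question])
    best = cosine_similarity(q, index).argmax()
    return statutes[best]

print(ask("What law covers entry into a building to commit theft?"))
```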


According to the New York Times, the states of Utah and Virginia have applied behavioral sentencing algorithms, in Virginia's case for over a decade.


Current algorithms can show bias based on how they were trained or on programmers’ tweaks, and it is hard to show exactly how they reach their decisions.


Opinion: this can change by testing for bias, retraining, and building systems that show their reasoning.


According to Science Magazine, AI judges have already beaten humans at predicting Supreme Court decisions. One study scored the AI at 83% accuracy.

Such AI models could be used to reverse-engineer Congressional bills. The bill’s authors could change parameters, run the bill through the model to see if it would pass or fail, and repeat the process until they had what they needed.
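
A minimal sketch of that loop (the scoring function here is a stand-in stub with invented weights; a real version would be the trained Supreme Court predictor): enumerate parameter variants, score each, and stop at the first one predicted to survive.

```python
import itertools

def predict_pass(bill: dict) -> float:
    """Stand-in for a trained Supreme Court model: returns a survival
    probability. The weights and fields are hypothetical, for illustration."""
    score = 0.9
    score -= 0.3 * (bill["scope"] == "national")
    score -= 0.2 * (bill["due_process_review"] is False)
    return max(score, 0.0)

# Sweep the adjustable parameters and keep the first variant the model
# predicts will survive review.
for scope, review in itertools.product(["national", "state"], [True, False]):
    bill = {"scope": scope, "due_process_review": review}
    p = predict_pass(bill)
    print(bill, f"-> predicted pass probability {p:.2f}")
    if p >= 0.7:
        print("drafting this variant")
        break
```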


A properly designed Digital Judicial System would revolutionize the entire court system.

Just as IBM’s Watson can analyze thousands of new cancer-research papers each day, assimilate the information, and apply it to specific cancer cases to produce a diagnosis, prognosis, and treatment plan for a patient, legal AI can do the same for court cases.

Legal AI can study hundreds of thousands of court cases, if not millions, take on thousands more daily, and never forget a word as it applies the law.


Data standards would facilitate entry into machine learning programs and case analysis.
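
As a sketch of what such a standard might look like (the field names are hypothetical): one shared record shape, serializable to JSON, that a police report, a court filing, and a machine-learning pipeline could all read.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CaseRecord:
    """Hypothetical shared schema for police reports and court filings,
    so every agency's data can feed the same analysis pipeline."""
    case_id: str
    jurisdiction: str
    statute_code: str
    filed: str            # ISO 8601 date
    evidence_refs: list   # pointers to reports, video, exhibits
    disposition: str = "open"

record = CaseRecord("2019-IL-001", "Cook County", "720-ILCS-5/26-1",
                    "2019-02-20", ["report-4412", "video-0031"])
print(json.dumps(asdict(record), indent=2))
```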

The Jussie Smollett case in Chicago was administered using “alternative justice,” implemented because of the huge backlog of cases. A public uproar ensued when three weeks of detective and police work, and a grand jury indictment of 16 felony counts of disorderly conduct, all came to nothing: the prosecutor cut a deal with Smollett’s lawyers to drop every charge in exchange for $10,000 in forfeited bail and 16 hours of community service, and then sealed the case.


Alternative justice puts all of the power in the hands of prosecutors, who are paid additional fees if they can persuade the defendant to accept it. Charges can be reduced or dropped altogether, subject to the prosecutor’s personal whims or political agenda, thus bypassing juries.


This is not justice. It is a sham.


A digital justice system run by machine learning and inference could also bypass juries, but it would administer justice with far less human error and bias. It could reduce the presence of corrupt prosecutors in the system, and reduce the need for prosecutors, judges, and juries in general, saving billions if not trillions of dollars annually.


On March 19, 2019, Wired.com ran a story on Estonia, a world leader in digital government, and its ambitious “robot judge” project. The AI judge would adjudicate small-claims disputes of under $8,000 to clear the current backlog of cases facing clerks and judges.

Recently launched, the AI project will start with a pilot program centered on contract disputes. The parties will upload documents and anything else relevant, and the AI judge will issue a decision that can be appealed to a human judge. The system will be adjusted based on feedback from lawyers and judges.


Other countries have used AI for sentencing but not so much for judicial decisions.

A coordinated AI effort across the federal government in the USA has gone slowly because each agency’s databases are different and not easily shared.


Opinion: so make the databases shareable and put them in a format conducive to AI usage.


Another reason for resistance or roadblocks to Digital Justice and Administration in the USA is that there is no national ID system, Americans fear “Big Government,” and there may be Constitution-based challenges to fully automated decision-making by a government agency.


Opinion: implement a national ID system; we need one for voting anyway. We have passports and state driver’s licenses, so what’s the big difference? I don’t think Americans would fear Digital Government as anything bigger than the “Big Government” that already violates many of their privacy rights and freedoms. If an automated judicial system were ruled unconstitutional, even with decisions that could be appealed to humans, we would need a Constitutional amendment. Don’t forget we can reverse-engineer appropriate bills and amendments using AI to predict Supreme Court pass-or-fail decisions.


ps: In the Jussie Smollett case, an AI judge and sentencing algorithm would likely have produced a guilty verdict based on the compelling evidence, and a reasonable, unbiased sentence would have been administered rather than the corrupt dismissal of charges that involved the breaking of laws by the Illinois prosecutor’s office.




https://amzn.to/2KELgWm


Deep State PURGE and FIX

Applying AI, Digital Govt, Block Chain, Deep Learning and Civic Tech to Fight the Deep State Agenda

http://bit.ly/2VvFRaH



LIES: Jussie Smollett and the Deep State Agenda

Amazon, look inside: https://amzn.to/2INUwHG

Press http://bit.ly/2ITwmvm


