A consortium of tech companies and governments is taking major steps towards blocking terrorist propaganda online. On the surface, the campaign sounds simple and familiar. James Feldkamp weighs in on recent updates announced at the Global Internet Forum to Counterterrorism (GIFCT) hosted by Google last month.
Apart from Google-owned companies like YouTube, the GIFCT’s members include Facebook, Microsoft, and Twitter, along with many smaller firms. The founding members say they have “a valuable role to play in supporting other tech companies” that may not have the same resources to address the spread of harmful material on their platforms.
The consortium seems to be taking a two-pronged approach to preventing the spread of harmful propaganda material. The first is the use of a simple database of web addresses that Twitter has already developed and blocked on its own platform. A similar approach was taken in the late 1990s to remove explicit content from search engine results. The second is more targeted to specific content and will make use of ‘hashes’ (digital fingerprints) to remove or block specific content. These will be shared with all companies in the consortium.
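The hash-sharing idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the database contents, the function name `is_flagged`, and the choice of SHA-256 are hypothetical, not the GIFCT’s actual implementation.

```python
import hashlib

# Hypothetical shared database of hashes ("digital fingerprints")
# of content that member companies have already identified as harmful.
shared_hash_db = {
    hashlib.sha256(b"example propaganda video bytes").hexdigest(),
}

def is_flagged(content: bytes) -> bool:
    """Check an upload's fingerprint against the shared hash database."""
    fingerprint = hashlib.sha256(content).hexdigest()
    return fingerprint in shared_hash_db

print(is_flagged(b"example propaganda video bytes"))  # True
print(is_flagged(b"ordinary upload"))                 # False
```

One design note: an exact cryptographic hash like SHA-256 only matches byte-identical files, so a re-encoded or slightly edited copy would slip through. Real content-matching systems therefore tend to use perceptual hashes, which tolerate such changes.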
As with explicit content, detractors are concerned about the possibility of false positives. For example, parody websites like TheOnion.com sometimes produce content about terrorism that may seem polemic (to put it mildly) on the surface.
With the problem of explicit content in search results, Google eventually changed its algorithm to prevent offending content from appearing unless the user entered more explicit search queries.
James Feldkamp poses the question: should the same thing be done with radical content? Should users be given the freedom to receive unadulterated terrorism-oriented content if they ask for it explicitly? In the UK, there is already a “three strikes” proposal that would penalize anyone caught watching “gruesome or inflammatory propaganda.”