infoSource

infoSource is a cybersecurity newsletter. By subscribing to infoSource you will remain up-to-date on the latest in communication, computer and software cybersecurity issues.

Didn't I See You Before?

Don't Get Faked Out. Part 1

A deepfake uses a combination of machine learning ("deep learning") and artificial intelligence to create synthetic media. Over the past few years, both the quality of deepfake video content and the ease of creating it have increased dramatically.

Skepticism, as a point of view, isn't a bad thing.

The InfoBro

Deepfakes are getting better and easier to make. Consequently, the way we decide whether to believe what we observe in a video on the internet has also had to evolve. While the underlying complexity of deepfake generation has not changed, the technology for producing a high-quality fake video has been greatly simplified.

Imagine this: You receive a video call on your smartphone. It's your significant other, who asks you to purchase 50 $10.00 gift cards for the community social gathering raffle. You just need to drop the cards off at the community center front desk, and they will be collected after the meeting. What dawns on you, but not until you're driving home after dropping the cards at the desk, is that the last interaction you had with your significant other was before they went grocery shopping. You just got phished by a deepfake.

This article, in two parts, discusses how advances in computing technology, combined with advances in computer science through machine learning and artificial intelligence, will affect the processes used to identify synthetic media.

Moore's Law 

Understanding computers and computer science is important in today's society; it's practically fundamental. Gordon Moore, a computer engineer and co-founder of Intel, postulated in a 1965 Electronics magazine article that, thanks to gains in transistor technology, the number of transistors on a chip, and with it performance per dollar, would double every year. As transistors get smaller they get faster and more power efficient. Moore later revised that prediction, but it has served the industry well in driving the production of cheaper and more powerful computer chips. More transistors on a chip means a more powerful chip. Those chips are used to perform computation. Computation is different from the individual operations performed by transistors: it takes many computer operations to perform a single computation.
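
To make that doubling concrete, here is a minimal sketch of the arithmetic. The 1971 baseline of roughly 2,300 transistors (the figure commonly cited for Intel's first microprocessor) and the two-year doubling period are assumptions chosen for illustration, not figures from this article:

```python
# A minimal sketch of Moore's Law as simple doubling arithmetic.
# Assumptions for illustration: a 1971 baseline of ~2,300 transistors
# and a two-year doubling period.

def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count for `year` under an idealized doubling law."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The projection is idealized, but it shows why exponential doubling, even from a tiny starting point, ends in chips with tens of billions of transistors.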

Making Things Smaller 

A person might ask just how many transistors you can get on a chip and why that even matters. In a nutshell, a smaller transistor is better than a bigger one because it takes less power to operate and less real estate (chip space) per unit. But you can only make it so small: it can't be smaller than the atoms used to build it. So there is a practical floor, a smallest point, for the size of a transistor, although there is hope. Transistor operations are also fast; they operate at a significant fraction of the speed of light (the fastest speed we know), so there is an upper limit on how quickly a transistor can perform an operation. If that's true, what can be done to make computers faster as those limits are approached? The real limit isn't the top speed of individual computer operations; it's computational complexity, and complexity is a direct result of the algorithm being processed.
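
To see why complexity, not raw transistor speed, becomes the bottleneck, the sketch below (an illustration of my own, not from the article) compares rough operation counts for a quadratic algorithm and an n log n algorithm on the same problem size. A faster chip only shaves a constant factor; a better algorithm changes the growth rate:

```python
import math

# Illustrative only: rough operation counts for two algorithms solving
# the same problem at different growth rates.

def ops_quadratic(n):      # e.g., comparing every item to every other item
    return n * n

def ops_n_log_n(n):        # e.g., a divide-and-conquer approach
    return n * math.log2(n)

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n={n:>13,}: n^2 = {ops_quadratic(n):.2e}, n log n = {ops_n_log_n(n):.2e}")
```

At a billion items, the quadratic approach needs on the order of a million times more operations, a gap no amount of transistor shrinking can close.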

"Good" Algorithms Are Hard To Describe


An algorithm is, generally, a process: its steps take inputs, and carrying out the steps produces some output. An example of an algorithm, in its simplest form, would be "Take an input A, output A + 1." An algorithm can have multiple inputs, multiple outputs, and lots of steps, and in many cases there is more than one approach that arrives at the same result (output). As a result of the advancements in computing technology discussed above, higher-level algorithms benefit from a new form of input formulation through the application of machine learning algorithms.
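
Here is a minimal sketch of the "add one" example above, plus a toy illustration of my own showing two different algorithms producing the same output:

```python
# The article's simplest example: take an input A, output A + 1.
def add_one(a):
    return a + 1

# Two different algorithms, one result: summing 1..n step by step versus
# using a closed-form formula. Same output, very different step counts.
def sum_by_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2

assert add_one(41) == 42
assert sum_by_loop(10_000) == sum_by_formula(10_000)
```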


The Bottom Line

This isn't the end of this discussion. Another overarching thread involves a higher level of information processing through the use of artificial intelligence. Artificial intelligence, loosely, is any device that "understands" its environment and applies its algorithm to interact with it successfully, and that understanding may be informed by the sub-field of algorithms chosen for machine learning. How the information gathered through machine learning is consumed by artificial intelligence to produce knowledge representation is the basis for the second part of this article and its prospective conclusion. But who knows, maybe our computer overlords will stop the next posting on how machine learning can inform artificial intelligence of better ways to synthesize a visualization. I think not. Let's pick that up later.

Stay safe. Be skeptical. Subscribe.




Contact Me

Address: 1309 S Street S.E., Washington, DC, 20020
Phone: 00 1 202-276-8641
Mail: eric.d.williams@infobro.com
Web: https://www.infobro.com
