Part One: Introducing the Problem & the Opportunity
By Jay Klein – CTO, Voyager Labs
For quite a while I have wanted to write a series of posts on the practical aspects of AI implementation. Since this is the first part and first impressions count, I was wondering how I should begin.
Some of you may regard this first part as an introduction, similar to a book's prologue or preface, the very same part that some readers choose to skip in order to get to where the real, exciting storytelling starts. Nevertheless, if at least some of you are still here and eager to begin, then I should weigh my verses and rhymes thoughtfully while trying to match your particular expectations regarding the content and even the style.
Speaking of first impressions, and although you may regard this as an unfair advantage, knowing something about you in advance, such as your reading style, can give me some clues. As a first take, I could try to average out the most appealing style, matching the characteristics of the mainstream readership of blogs like this one. It won't come as a surprise that the individual preferences of each reader would degrade my effectiveness, so it is likely I would need to rethink my strategy and upgrade my tactics, forced to create customized, personalized versions for each and every one of you.
If I keep entertaining such 'innovative' publishing practices, further inspirations follow. Why should I stop at the introduction? If one can sense the reader's preferred style and, as a result, dynamically adjust the content before publishing or distributing any manuscript, then the next thought seems unavoidable: perhaps the whole content could somehow be generated automatically, either from the ground up or from content 'building blocks' pre-written for a certain subject. Note that if this were the case, my duties and responsibilities would be changed, reduced and reshaped as a result. Unemployment rates of blog writers on the rise? That's a point for another blog.
To AI or Not to AI
But let's pause here for a moment, as at this stage you are probably asking yourself something like
– ‘This blog is about AI… right?’
Rest assured – It is.
The 'another introduction' discussion may seem like a digression, but it is just another 'Artificial Intelligence' debate in disguise, of the kind frequently encountered when organizations initiate new so-called 'AI projects'. Since most of you already have, without any help from me, at least some idea of what AI could be, whether accurate or vague, based on science or on science fiction, I would like to focus at this stage on what many seem to neglect: the debates concerning the applicability of AI technology, much like the discussion we just had. These 'talks' are at the heart of any practical implementation consideration and, surprisingly (or not), seem to share a distinctive structure. Recognizing the characteristics of these debates makes AI technologies genuinely accessible, because it helps apply them to the right business problems organizations are facing.
The first phase of these debates surrounds problem discovery: simply trying to define and identify the difficulties to be handled. Surprisingly, this task can be trickier than it seems. 'Scientifically', this can be a somewhat technical, process-oriented exercise; the problem may be easily noticeable, or it may be an underlying issue that is only exposed as we delve into the particulars. For example, in the previous 'introduction' discussion we observed that many readers skip the intro parts, and in parallel acknowledged how difficult it is for authors to make an impactful impression on each individual reader.
Mission (Almost or Always) Impossible
To bring some AI specifics into the discussion, we need to pay attention to the critical precursor events that lead up to the debate itself, and to the particular observations worth keeping in mind when these AI-related debates are initiated.
For starters, we should be cautious about adopting the modernized version of René Descartes's famous proposition 'Cogito ergo sum', which has evolved into something like
– ‘I think’ (I have data)
– ‘therefore, I am’ (sure that AI will reveal beneficial insights)
The danger with this statement is that although it may be true, and every data scientist will swear they have concrete evidence supporting it, it also disconnects us from the actual problems that organizations implementing AI technologies should be facing. The addictive dogma that something may be hiding in the organization's precious data causes many of them to invest blindly in AI platforms while 'forgetting' which core concerns of their business could actually benefit from AI adoption. We need to acknowledge that AI is not just another technology solution in search of a yet-to-be-found problem; its main power is derived from the creativity of weaving it into current technology implementations.
Let me elaborate a bit on this point.
Gartner discusses this AI feasibility transition and defines it as a progression from an 'Always Impossible' situation to an 'Amazing Innovation' state. As in everyday experience, the borderline between what is possible and what is not is ambiguous, which is why I prefer a fuzzier version: not 'Always Impossible' but 'Almost Impossible'.
'Almost' means that in most cases the starting point for applying AI technologies is not a blank slate. The so-called problem may have existed for years and, in some cases, has already been solved to some degree. Because we are often so amazed by the significant outcomes of applying AI, we tend to deemphasize or even completely dismiss any prior achievements or contributions toward a known problem. This 'amazement' can be blinding.
Greenfield Doesn't Mean a Desert
Take, for example, an 'invention' such as autonomous cars, which seems like a purely AI-related accomplishment. The truth is that AI was brought to the table after many years of varied development across the motor vehicle industry. Did cars drive themselves autonomously before AI was introduced? The simple answer may be no, but if we look back at the history, the answer becomes a bit more complicated.
Cars have been computer-aided for the last 30 years. If you own a car, you should know that it takes digital measurements of hundreds of variables (e.g., engine operation, steering condition, tire air pressure) and, in response, uses some of the collected information to digitally control various components, including the safety elements. The Anti-lock Braking System (ABS) allows the wheels to maintain tractive contact with the road, preventing wheel lock-up and uncontrolled skidding. In practice, when particular road and weather conditions occur, ABS automatically initiates threshold and cadence braking maneuvers, imitating the behavior of an experienced driver with a legacy braking system, yet it does so at a much faster rate, achieving far better control than a 'normal' driver could manage. From the outside it may look as if the 'machine' pushes the 'human' aside when danger is encountered. Of course, there is quite a gap between a single safety-related behavioral response and a fully autonomous driving experience, but I think you get the point: intelligent devices were put into play well before the AI era.
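To make the threshold-and-cadence idea concrete, here is a deliberately simplified Python sketch: estimate wheel slip, release brake pressure when slip crosses a threshold, and reapply it once the wheel regains traction. The threshold, the pressure steps and the function names are illustrative assumptions of mine, not how any production ABS controller is implemented.

```python
# Toy illustration of threshold/cadence braking, NOT a real ABS controller.
# All thresholds and pressure steps are made-up values for illustration only.

def wheel_slip(vehicle_speed: float, wheel_speed: float) -> float:
    """Slip ratio: 0.0 means the wheel rolls freely, 1.0 means it is fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def abs_control_step(vehicle_speed: float, wheel_speed: float,
                     brake_pressure: float,
                     slip_threshold: float = 0.2) -> float:
    """One control-loop iteration: release pressure when slip is too high,
    reapply it when the wheel regains traction (the 'cadence')."""
    slip = wheel_slip(vehicle_speed, wheel_speed)
    if slip > slip_threshold:
        # Wheel is about to lock: back off the brake pressure.
        return max(0.0, brake_pressure - 10.0)
    # Wheel has traction again: reapply pressure toward the driver's demand.
    return min(100.0, brake_pressure + 5.0)

# Example: a wheel decelerating much faster than the vehicle triggers a release.
print(abs_control_step(vehicle_speed=25.0, wheel_speed=15.0, brake_pressure=80.0))
```

Run in a loop at a high rate, this release-and-reapply cycle is what lets the system react far faster than a human pumping the brakes.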
Take another example. Digital Image Processing.
At first, this research field was regarded as merely extending one-dimensional digital signal processing techniques (e.g., voice, sound) to the two-dimensional realm: 2D filtering, 2D domain transformations and 2D estimation were all researched and developed. However, when faced with a concrete task, such as face recognition, or something seemingly simpler like 'Can you see a gun in this photograph?', the proposed solutions suffered from confidence-level issues.
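As a small illustration of what 'extending 1D signal processing to 2D' meant in practice, the sketch below applies a classic hand-crafted 2D filter (a Sobel edge kernel) to an image array via 2D convolution. This is ordinary pre-deep-learning image processing, shown only to make the idea tangible; the random 'image' is a stand-in for real data.

```python
import numpy as np
from scipy.signal import convolve2d  # classic 2D convolution, no learning involved

# A stand-in "image": in practice this would be a grayscale photo loaded as an array.
image = np.random.rand(128, 128)

# Hand-crafted Sobel kernel for horizontal edges, a typical pre-AI 2D filter.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# 2D filtering: the direct extension of 1D FIR filtering to two dimensions.
edges = convolve2d(image, sobel_x, mode="same", boundary="symm")

print(edges.shape)  # (128, 128): an edge-strength map the same size as the input
```

Pipelines built from such hand-designed filters and transforms could detect edges and textures, but turning those low-level features into a reliable 'is there a gun here?' answer is exactly where they struggled.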
Initially, the industry tended to blame the lack of computing power. It was well known, even in the simpler realm of vocoders a few years earlier, that high-quality voice compression techniques were avoided for quite a while because computing power in general was limited; only after signal processors became powerful enough (and cheap enough, and battery-friendly enough for mobile) were higher-quality voice and sound processing and compression exploited. In the image processing arena, however, especially for recognition and identification problems, even the additional computational power that became available (faster machines, dedicated co-processors), coupled with the significant algorithmic advances made in the meantime (still using more or less the same toolset of the same research domain), produced only limited overall improvements. Over time, a performance ceiling was reached. Only when AI, with its deep learning capability, was introduced and folded into the core solution components of these 'vision' problems was the 'glass ceiling' finally shattered; today, some AI-based identification solutions outperform what humans are capable of.
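To contrast that with the deep-learning era described above, here is a minimal sketch of classifying an image with an off-the-shelf pretrained network (torchvision's ResNet-50). The file path is a placeholder, and this is just one possible way to run such a model, not a recommendation of a specific architecture or the approach of any particular product.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load a network pretrained on ImageNet together with its matching preprocessing.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

# "photo.jpg" is a placeholder path; any RGB image would do.
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```

The features here are learned from data rather than hand-designed, which is precisely the shift that broke through the performance ceiling of the earlier filter-and-transform toolset.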
As with autonomous cars, there had been extensive prior work on the topic. 'The problem' was on the table for quite a while, either slowly building up the 'pieces' and 'layers' of the final solution or even through attempts to solve the entire problem itself. In both cases AI provided the breakthroughs needed, but in both examples, although from a pure 'AI' perspective this was 'greenfield', in reality there was already 'something' there that clearly defined the problem space.
Does this discount the contribution of AI technology?
Clearly the answer is no.
It just gives us a hint as to where some problems reside.
To Summarize
René Descartes had more than one proposition. He also said, 'It is not enough to have a good mind; the main thing is to use it well.' The problem discovery phase is an opportunity to do exactly that.
This is the time to revisit problems that were already tackled with non-AI methodologies, which gave back limited results, and that can now be approached again because some of the supporting 'ingredients' have been dealt with. This is the nature of the problems surrounding businesses. Acknowledge this, and you can free yourself from the 'hide-and-seek' game of searching for concealed problems waiting for AI, and instead focus on examining how AI can provide the necessary breakthroughs for existing issues.
What are the other characteristics and phases of these AI-related debates?
See you in the next part.