Human-AI Symbiosis? All people should decide how modern humanity unfolds. This article considers software-industry developments from under a decade ago that may mold the future.
Despite its famously late arrival to mobile computing, Intel made strides in wearables before many others, beginning in mid-2013. Much of this may trace back to the strategic diversification the company undertook that year.
Hundreds of Millions Poured Into Research & Development
Intel invested at least 100 million dollars in capital expenditures and personnel for its now-defunct ‘New Devices Group’, an experimental branch of Intel charged with creating speech- and AI-enabled devices.
While many high-profile people were hired, projects launched, and acquisitions made, investors were either unaware of these expenditures or unhappy with their slow roll to market.
These capital-intensive moves into different technology spaces were possibly a proactive measure to avoid missing the ‘next big thing’, as Intel had when it failed to provide the chipset for the Apple iPhone. At the time, Brian Krzanich had been newly appointed as Intel’s CEO to help the company move past these failures, which were attributed – rightly or wrongly – to the prior CEO, Paul S. Otellini.
Why Did Intel Invest In Wearables?
Once Krzanich became CEO of Intel in May 2013, he quickly moved to diversify Intel’s capabilities beyond chip-related activities. Nonetheless, these efforts were still attempts to amplify the relevance of the company’s chipsets by participating in the various places where computing would become more ubiquitous: home automation, wearables, and mobile devices with specialized, speech-enabled features. The logic was that these computing demands would naturally lead to an increased appetite for powerful chipsets.
This uncharacteristic foray into the realm of ‘cognitive computing’ led to several research groups, academics, and smaller start-ups being organized under the banner of the ‘New Devices Group’ (NDG). Personally, I was employed in this organization, and I believe the expertise and technology from NDG may regain relevance in today’s business climate.
Elon Musk’s Tweet: Indicative Of New Trends?
For instance, Elon Musk recently tweeted a request for engineers experienced in wearable technologies to apply to his Neuralink company. On the surface, this may mean only researchers who have worked on Brain Machine Interfaces, but as Neuralink and its competitors bear down on some of the core concepts surrounding wearables, subject matter experts in other fields may be required as well.
When we consider what Musk is discussing, it would be fair to ask what constitutes ‘Human’.
Without being too pedantic, I would assume that linguistics has something to do with describing humanity – specifically, the uniqueness of the human mind.
As corporate curiosity becomes better able to package ever more varied and sophisticated chunks of the human experience, those experiences yielded primarily through text and speech are best described by Computational Linguistics and are already fairly well understood from a consumer-product perspective. It is fair to say that finding the points of contact between neurons (literal ones, not the metaphors from Machine Learning) firing under some mental state and some UI is the appreciable high-level goal for any venture into ‘Human-AI’ symbiosis.
Thorough descriptions of illocutionary meaning, temporal chains of events, negation, and various linguistic cues in both text and speech could have consistent neural representations of the kind captured routinely in brain-imaging studies. What remains unclear, however, is how these semantic properties of language would surface in electrodes meant for consumer applications.
Radical Thinkers Needed
The need either to link existing technology or to expand available products so that they exploit these very intrusive wearables (a separate moral point to consider) likely calls for many people to be employed in this exploratory phase. Since the work is exploratory, the best individuals may not be the usual checklist-based academics or industry researchers found in these corners. If the Pfizer-BioNTech vaccine development is any indication, sometimes it is the unconventional researchers who are most innovative.
Ricardo Lezama — ImageAI is an excellent, easy-to-use machine learning wrapper that allows a Python script to identify the dominant concept describing an image. While this article covers only a tiny use case, I would recommend that users be aware of the need to install the right C++ dependencies.
Facebook Image AI
The developers are a group from a Facebook-backed outfit based in Nigeria. One of the principal developers is Moses Olafenwa, a founder of DeepQuest AI. Aside from this excellent Python library, Olafenwa’s group develops AI servers for business applications.
Code Summary: ImageAI Predictions
In this summary, we will review the code examples here: https://github.com/OlafenwaMoses/ImageAI/tree/master/imageai/Prediction
Model Dependencies: ResNet
Aside from the libraries called through import statements, the more important dependencies for our test script using ImageAI’s Python module are the models one can run a particular image against. In this particular example, we reference the ResNet model trained on the ImageNet-1000 images. There is an annual competition in which various neural net models are compared against one another using the ImageNet libraries as a frame of reference.
ResNet is a model that uses ‘residual learning’ to make much deeper networks trainable.
According to the authors, ResNet “explicitly reformulate[s] the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.”
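The quoted idea can be made concrete with a toy sketch. The following is an illustrative NumPy miniature of a single residual block, not the actual ResNet-50 architecture: the block computes a residual function F(x) from two small weight matrices and adds the input x back through a skip connection, so the block only has to learn the difference from the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two toy 'layers' plus an identity shortcut: out = relu(F(x) + x)."""
    f = relu(x @ w1) @ w2   # the residual function F(x)
    return relu(f + x)      # skip connection adds the input back

dim = 8
x = rng.standard_normal(dim)
w1 = rng.standard_normal((dim, dim)) * 0.1
w2 = rng.standard_normal((dim, dim)) * 0.1
y = residual_block(x, w1, w2)
```

Note that when the weights are near zero, F(x) vanishes and the block approximates the identity, which is part of why very deep stacks of such blocks remain easy to optimize.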
Easy Ways To Interface
Instead of modifying the hardcoded line referencing an image, we modified the sample script to accept a simple command-line argument. The script (posted originally here) has been modified slightly: I added a reference to the built-in sys library to pass in a command-line argument.
Name the file “prediction.py”, then run the script (copy/paste) from wherever your image file is located. Also, the model is resnet50_weights_tf_dim_ordering_tf_kernels.h5, a Microsoft-sponsored model developed by Kaiming He et al.
from imageai.Prediction import ImagePrediction
import sys

# Load the ResNet model weights from the current directory.
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

# Pass the image path as the first command-line argument.
predictions, percentage_probabilities = prediction.predictImage(sys.argv[1], result_count=5)
for index in range(len(predictions)):
    print(predictions[index], " : ", percentage_probabilities[index])
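Since the script takes its input from the command line, a missing or mistyped path would otherwise surface as a confusing traceback from deep inside the model code. The small guard below is a hypothetical refinement of my own, not part of the original script; `resolve_image_path` is a helper name I introduce here for illustration.

```python
import os
import sys

def resolve_image_path(argv):
    """Return the image path given on the command line, or None if it
    is missing or does not point to an existing file."""
    if len(argv) < 2:
        return None
    path = argv[1]
    return path if os.path.isfile(path) else None

if __name__ == "__main__":
    image_path = resolve_image_path(sys.argv)
    if image_path is None:
        sys.exit("usage: python prediction.py <image-file>")
    print("predicting on:", image_path)
```

You would then hand image_path, rather than raw sys.argv[1], to predictImage, so the model only ever sees a path that actually exists.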