

First-Hand Insights on ESG-Friendly AIoT (AI + IoT) InsurTech


New Site New Mission - TECHENGINES.AI is dedicated to offering ESG-Friendly AIoT Solutions

AIoT solutions can effectively help companies improve their ESG metrics and achieve sustainable development

Environmental, Social, and Corporate Governance (ESG) is a collection of efforts undertaken by the business world to consider sustainable and social impacts alongside financial gains. The COVID-19 pandemic has highlighted the central role that businesses should play in creating a more prosperous, safer world and a more sustainable relationship with our planet. AIoT (AI and IoT) technologies can help companies act now to transform to a lower-carbon business model by operating cleaner tech more efficiently, and so deliver environmentally friendly and socially responsible prosperity.

The Internet of Things (IoT) is widely considered a major opportunity to digitize many operations and bring tremendous benefits to our everyday lives. According to the World Economic Forum report IoT Guidelines For Sustainability, “The Internet of things (IoT) is undoubtedly one of the largest enablers for responsible digital transformation. It is estimated that industrial IoT alone can add $14 trillion of economic value to the global economy by 2030.” The report also mentioned that IoT projects can help attain the 2030 Agenda for Sustainable Development, which includes the 17 Sustainable Development Goals (SDGs) set by the United Nations. Combined with Artificial Intelligence (AI), AIoT can help achieve goals that encompass clean water, responsible water use, fighting climate change, energy efficiency and industrial innovation, among others.


WUDAO 2.0 - Chinese BAAI lab built the world's largest AI Deep Learning model with 1.75 trillion parameters, challenging Google and OpenAI

WUDAO: be enlightened by the truth of the universe through the biggest AI model in the world

Dao is a Chinese philosophical concept signifying the “truth”. WUDAO means, in Chinese, to realize the truth of the universe. At the Beijing Academy of Artificial Intelligence (BAAI)‘s annual academic conference today, the institution announced the launch of version 2.0 of WUDAO, a pre-trained AI Deep Learning model that the lab claimed to be “the world’s largest ever”, with 1.75 trillion parameters: 150 billion more than Google’s Switch Transformer and ten times the number used by OpenAI’s GPT-3.

Another distinctive feature of WUDAO 2.0 is that it is a multimodal model, trained to tackle both Natural Language Processing and Computer Vision tasks, two dramatically different types of problems. During the live demonstration, the model performed text-generation tasks such as writing poems in traditional Chinese styles, writing articles and generating alt text for images, and, at the same time, image-generation tasks such as producing images from text descriptions. In collaboration with a Microsoft spin-off company in China, it became the brain powering XiaoBing, a “virtual assistant” like Apple’s Siri or Amazon’s Alexa.

The Chinese lab has tested WUDAO’s sub-models on industry-standard datasets and claimed that they achieved better performance than previous models: beating OpenAI’s CLIP on the zero-shot ImageNet classification benchmark; beating OpenAI’s CLIP and Google’s ALIGN on image/text retrieval on the Microsoft COCO dataset; and beating OpenAI’s DALL-E on image generation from text with its CogView sub-model.
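Zero-shot classification in CLIP-style multimodal models works by embedding an image and a set of candidate text labels into a shared vector space, then picking the label whose embedding is most similar to the image embedding. A minimal sketch of that scoring step, using small hypothetical embeddings (real systems obtain high-dimensional vectors from trained encoders):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(image_embedding, label_embeddings):
    """Pick the label whose text embedding is closest to the image embedding."""
    return max(label_embeddings,
               key=lambda lbl: cosine_similarity(image_embedding, label_embeddings[lbl]))

# Hypothetical 3-d embeddings for illustration only.
labels = {
    "a photo of a cat": [0.9, 0.1, 0.2],
    "a photo of a dog": [0.1, 0.8, 0.3],
}
image = [0.85, 0.15, 0.25]  # embedding of an unlabelled cat photo
print(zero_shot_classify(image, labels))  # -> "a photo of a cat"
```

No task-specific training is involved at classification time, which is what makes the setup “zero-shot”: new labels can be added just by embedding new text prompts.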


Deep Learning AI, Money is all you need?

Very expensive hardware systems (i.e. money) have played a major role in recent advances in Deep Learning AI.

Nowadays, we’re used to new papers being published every day on Deep Learning AI, a field moving so fast that it is sometimes hard to keep track. On the free e-Print archive arXiv, for the subject “Computer Vision and Pattern Recognition” alone, an average of 38 papers were published each day in May. This time, a paper presented by Google on the 4th May, MLP-Mixer: An all-MLP Architecture for Vision, made some serious noise, partially due to a very “Google-style” title featuring “all-MLP”.

Contrary to what you might think, MLPs (multi-layer perceptrons) were already a popular machine learning solution in the 1980s. CNNs (Convolutional Neural Networks) were revolutionary for Deep Learning thanks to their inductive bias: built-in assumptions such as locality and translation invariance that let the model make much more efficient use of limited data and computing power.
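One concrete way to see the inductive bias of convolutions is to count parameters: a fully connected (MLP) layer learns one weight per input-output pair, while a convolution shares a small kernel across all spatial positions. An illustrative comparison for a 32×32 grayscale image:

```python
def dense_params(n_in, n_out):
    """Weights + biases of a fully connected layer."""
    return n_in * n_out + n_out

def conv_params(kernel_size, in_channels, out_channels):
    """Weights + biases of a conv layer; the kernel is shared across positions."""
    return kernel_size * kernel_size * in_channels * out_channels + out_channels

pixels = 32 * 32  # a 32x32 grayscale image flattened to 1024 values
print(dense_params(pixels, pixels))  # -> 1049600 parameters for one dense layer
print(conv_params(3, 1, 1))          # -> 10 parameters for a 3x3 convolution
```

The five-orders-of-magnitude gap is why CNNs could be trained on the data and hardware of the 1990s and 2000s, while architectures with weaker built-in assumptions tend to need far more data and compute to reach the same accuracy.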

That’s why academic gurus like Yann LeCun didn’t much appreciate the industry tech giants’ advances achieved mainly by piling up data and GPUs, in other words, money! LeCun retweeted the above paper, challenged its claim of being free of Convolutional Neural Networks, and went on to question the real value of such research.


Upgrade RPA to the next level and redesign Automation-Driven systems with AIoT

RPA is an old concept; however, it should evolve with the times.

Robotic process automation (RPA) is a form of business process automation in which a set of instructions is defined for a software “bot” to perform. The term can be traced back to the 1990s. The initial RPA, let’s call it “RPA 1.0”, was designed to replace time-consuming and mind-numbing repetitive human tasks, such as data entry or software testing. The main objective of implementing RPA 1.0 was to help businesses move data across different applications using functions such as desktop macros and screen scraping; it mimicked human-computer interactions. With relatively simple rule-based workflows, RPA 1.0 helped automate standardized, repetitive but high-volume tasks. Most of the prevalent RPA platforms/tools still remain at the RPA 1.0 stage. To be a candidate for such an RPA 1.0 platform/tool, a business process needs to:

  • Be rule-based.
  • Be repeated at regular intervals or have a pre-defined trigger.
  • Have pre-defined inputs and outputs.
  • Have sufficient volume.
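The four criteria above can be sketched as a toy rule-based bot: each rule maps a pre-defined input pattern to a pre-defined action, and anything outside the rules is escalated to a human. This is a hypothetical illustration of the concept, not any particular RPA platform’s API:

```python
def rpa_bot(record, rules):
    """Apply the first matching rule; escalate to a human if none matches."""
    for condition, action in rules:
        if condition(record):
            return action(record)
    return {"status": "escalated", "record": record}

# Pre-defined, rule-based workflow: auto-enter small invoices into an "ERP" system.
rules = [
    (lambda r: r.get("type") == "invoice" and r.get("amount", 0) < 10_000,
     lambda r: {"status": "entered", "target_system": "ERP", "amount": r["amount"]}),
]

print(rpa_bot({"type": "invoice", "amount": 420}, rules))     # auto-processed
print(rpa_bot({"type": "invoice", "amount": 50_000}, rules))  # escalated
```

Everything here is fixed in advance, which is exactly the limitation of RPA 1.0: any input the rule authors did not anticipate falls back to manual handling.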

Implementing RPA 1.0 is considered a first step in the automation journey. RPA 1.0 certainly had many benefits and helped to improve business efficiency, but its value was limited.


AIoT - The crucial component that navigates the roadmap of Insurance towards Industry 4.0

The roadmap of Insurance towards Industry 4.0 is no different from that of Manufacturing

Industry 4.0, or the Fourth Industrial Revolution, is the ongoing automation of traditional manufacturing and industrial practices using modern smart technology. The term originated in 2011 from a project in the high-tech strategy of the German government, which promotes the computerization of manufacturing. There are four principles identified as integral to Industry 4.0:

  • Interconnection
  • Information transparency
  • Technical assistance
  • Decentralized decisions

There are extensive technologies related to Industry 4.0, among which IIoT (Industrial Internet of Things) is an important component. However, as mentioned in our previous article, without combining advanced AI technologies, most of the enormous amounts of data generated by IoT devices would simply be wasted (unstored and unanalysed). Therefore, we believe that AIoT should be considered a crucial component when an industry designs its roadmap towards Industry 4.0.

Industry 4.0 is a revolution of actual organizations and value chains. It requires reconstructing the whole ecosystem of the business to allow maximum flexibility, productivity and profitability.

It’s obvious that all the above concepts can be applied to almost every industry, including Insurance. The I4.0 blueprint below can easily be interpreted as Insurance 4.0.


Effective AIoT (AI + IoT) implementation requires Edge-Cloud Collaborative Computing

AIoT, the convergence of AI and IoT, requires rethinking the IT infrastructure architecture

IoT devices are generating more data than ever. According to an IDC forecast, by 2025 there will be 55.7 billion connected devices worldwide, and the data generated from connected IoT devices will reach 73.1 ZB, growing from 18.3 ZB in 2019.

Unfortunately, whether we admit it or not, most IoT data goes unstored and unanalysed. So nowadays, when we talk about “data-driven” decisions, very likely less than 1% of the data has actually been analysed or used in the decision-making process. This volume is simply beyond human capacity to process, which is precisely why AI and Machine Learning (ML) algorithms need to be incorporated to form AIoT applications.

However, effective AIoT applications also require high responsiveness from highly decentralized IoT devices. This puts great pressure on the classic Cloud Computing paradigm in terms of network bandwidth and communication latency. Moreover, constantly sending large amounts of raw data back to a centralized cloud is unrealistic and costly.

That’s why Edge-Cloud Collaborative Computing came into the picture. Edge Computing is a distributed computing paradigm that brings computation tasks and data storage closer to the devices and data sources on the edge of the network, to improve response times and save bandwidth.
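A common pattern in edge-cloud collaboration is to aggregate raw sensor readings on the edge node and ship only a compact summary (plus any anomalous raw values) to the cloud. A minimal sketch of that idea, with hypothetical field names and thresholds:

```python
def edge_summarize(readings, alert_threshold):
    """Aggregate raw readings locally; send only a summary plus anomalies upstream."""
    anomalies = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only these raw values leave the edge
    }

# 1,000 temperature readings stay on the edge node; the cloud gets one small dict.
readings = [20.0] * 998 + [95.5, 21.0]
payload = edge_summarize(readings, alert_threshold=80.0)
print(payload["count"], payload["anomalies"])  # 1000 [95.5]
```

Instead of 1,000 raw values crossing the network, the cloud receives a few numbers, which is how edge computing saves bandwidth while still surfacing the events that matter.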


AI Game of Thrones for Deep Learning - TensorFlow/Keras vs PyTorch

April Fool? Keras: Killed by Google

Keras is almost a household name in the Data Science community. On the 28th March, MIT CSAIL sent a tweet to celebrate Keras’s 6th birthday. However, only a few days later, on the 1st April, a post titled Keras: Killed by Google sparked heated discussion on Reddit.

Keras is an open-source software library that provides a Python interface for artificial neural networks. Its primary author and maintainer is François Chollet, a Google engineer.

Keras already supported another neural network (NN) backend, Theano, years before Google open-sourced TensorFlow in November 2015. Since Keras v1.1.0, TensorFlow has been the default NN backend. Keras’s stable, consistent, simple and framework-agnostic APIs are well appreciated by the Data Science community and helped TensorFlow grab the throne in the Deep Learning framework world. In 2019, TensorFlow 2.0 came with a tight integration of Keras through an intuitive high-level API, tf.keras, which was also the trigger of the above Reddit post.

The No.1 position is never easy to maintain. In 2017 Facebook threw its hat in the ring with PyTorch, a more Pythonic Deep Learning framework born with dynamic computation graphs. The competition between these two tech giants was so fierce that, on the 28th September 2017, Yoshua Bengio announced in an open letter that Theano would stop being updated and maintained.
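A “dynamic computation graph” means the graph is built on the fly as ordinary Python code executes, then traversed backwards to compute gradients. A toy reverse-mode autodiff sketch of that idea (a didactic illustration only, nothing like PyTorch’s real implementation):

```python
class Var:
    """A value that records how it was computed, building the graph dynamically."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

    def backward(self, upstream=1.0):
        """Propagate gradients back through the recorded graph."""
        self.grad += upstream
        for parent, local_grad in self.parents:
            parent.backward(upstream * local_grad)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # the graph is created simply by running this line
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because the graph is rebuilt on every forward run, control flow like Python `if`/`for` just works, which is a large part of why researchers found PyTorch more ergonomic than the static graphs of early TensorFlow.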

Since then, the war among Deep Learning frameworks has effectively become a war between TensorFlow/Keras, PyTorch and Others (less than 5%).


Apply AIoT - “exponential” AI and IoT technologies to re-imagine Health Insurance

Healthcare is an essential service for everyone. AIoT can help health insurers deliver personalized policies

Healthcare is an enormous ecosystem with boundless applications for AIoT technologies. The insurance industry is continuously adopting IoT technology across multiple product lines in order to predict and prevent risks. Smart sensors enable the collection and monitoring of human vital signs in real time, and these data reflect our physical and mental status. The large volume of data harvested by the sensors is completely beyond human interpretation and has to be processed and analysed by AI algorithms to produce meaningful indications, so that we can adopt the right measures of care and improve our health conditions.
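As a toy illustration of the kind of automated analysis involved, a rolling z-score can flag vital-sign readings that deviate sharply from a person’s recent baseline. The window size and threshold below are hypothetical, chosen for the example, and are not clinical guidance:

```python
import statistics

def flag_anomalies(heart_rates, window=5, z_threshold=3.0):
    """Flag readings that sit far from the mean of the preceding window."""
    alerts = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
        z = (heart_rates[i] - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append((i, heart_rates[i]))
    return alerts

stream = [72, 74, 71, 73, 72, 75, 73, 140, 74, 72]  # one suspicious spike
print(flag_anomalies(stream))  # -> [(7, 140)]
```

Running such lightweight checks continuously over sensor streams is exactly the kind of task that exceeds human capacity but is trivial to automate, feeding the prevention-oriented services insurers want to offer.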

Nowadays, AIoT technologies are increasingly integrated into medical services across hospitals, clinics, diagnostics facilities, health insurance, the pharmaceutical sector, real-time health systems (RTHS), robotics, smart pills and other care-related establishments. Therapeutics, diagnostics and infrastructure will all be affected by the introduction of digital health powered or augmented by AIoT technologies.


None of the 62 so-called AI models for Covid-19 was of potential clinical use, according to the research led by the University of Cambridge

In 2020, Covid-19 ravaged the world. Machine learning and Artificial Intelligence (AI) were considered promising and potentially powerful techniques for detection and prognosis of the disease. To assist doctors in screening potential patients more quickly and accurately, data scientists around the world published thousands of studies with machine learning based models, claiming that these models could diagnose or prognosticate the coronavirus disease 2019 (COVID-19) from standard-of-care chest radiographs (CXR) and chest computed tomography (CT) images.

However, the analysis Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans, published in Nature Machine Intelligence on the 15th March 2021, revealed that none of them is suitable for detecting or diagnosing COVID-19 from standard medical imaging, due to data constraints and methodological flaws.

The researchers followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and reviewed 2,212 studies with published papers and preprints, covering the period from 1 January 2020 to 3 October 2020. Only 415 studies remained after initial screening, of which 62 were included in the review after quality screening.

PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.


AIoT InsurTech helps to achieve the core value of Insurance - protection against risks

Protection has always been at the core of insurance

Insurance originated as a means of protection from financial loss, helping individuals, businesses and societies to mitigate risks, recover from losses caused by inherent uncertainty, and thrive towards financial prosperity.

Besides providing peace of mind to their clients, by leveraging InsurTech, insurers can enrich financial protection and offer additional value-added services such as better pre-incident prevention and post-incident assistance.

It’s common knowledge that the COVID-19 pandemic has been the catalyst needed to accelerate the digital transformation of the insurance industry. Among all the new technologies, five are considered fundamental components empowering the new generation of insurance: Cloud Computing, Big Data, the Internet of Things, Artificial Intelligence and Blockchain.