Technology Archives – Polaris website
https://polarisitgroup.com/category/technology/

Polaris IT Group SA Capital Group makes over PLN 10 million in revenues and PLN 1.8 million in net profit in Q1 2023
https://polarisitgroup.com/2023/05/17/polaris-it-group-sa-capital-group-makes-over-pln-10-million-in-revenues-and-pln-1-8-million-in-net-profit-in-q1-2023/
Wed, 17 May 2023

As announced by Polaris IT Group SA, an IT company listed on the Warsaw NewConnect stock market, the consolidated sales revenues of the Polaris IT Group SA Capital Group at the end of the first quarter of this year amounted to PLN 10,104,858.73, while the consolidated net profit amounted to PLN 1,811,672.74, which, as emphasized in the opening part of the results commentary, allows the company to look at the coming quarters of this year with optimism despite the still difficult market environment.

„In the first quarter of 2023, iSRV Zrt. [Hungarian subsidiary, included in the structures of the Polaris IT Group SA Capital Group] achieved a pre-tax profit of HUF 174 million and revenues of HUF 823 million. Of these revenues, HUF 265 million came from license trading, but a significant amount also came from the sale of the EDL system (online education system) and the solution built on it, which accounted for another HUF 173 million in revenues. In addition, the company recorded HUF 195 million from individual implementations and sales of FPGA licenses and HUF 37 million from system operation. Furthermore, the sale of the Enclosed Server license developed by iSRV generated revenues of HUF 125 million” – this is how the first-quarter business activity of the key operating company of the Polaris IT Group SA Capital Group was described.

„The company has performed a cost-effectiveness review; therefore, we expect savings in operating costs from the second quarter. In the field of research and development, the company focused mainly on bug fixes; other than that, no new development expenses were incurred” – the commentary continues.

„In 2023, the economic situation in Hungary remains extremely difficult, which affects the company both directly and indirectly. iSRV has two existing consortium contracts with a total framework of HUF 450 billion (with T-Systems and 4iG), for which we have high hopes. However, we already see clearly that they can only start once Hungary and the EU reach an agreement and Hungary receives the funds on which these projects are based” – this is how current market conditions were described.

„As we emphasized in the annual report for 2022, we are constantly looking for areas that give the company a chance of profitable operation even in the current situation. In the near future, we will focus on three such areas: software development in the energy sector; the provision of telemedicine services based on the collection of telemetric data and AI in the healthcare sector, together with the sale of related devices; and the implementation of optimization algorithms and AI-based solutions in the logistics sector, aimed at shortening and optimizing the time of transporting industrial products by rail. We are actively preparing projects in all three areas, and we are convinced that they will soon bring further successes for iSRV Zrt. and thus also for the Polaris IT Group SA Capital Group” – this is how the current operational goals and business prospects of the Polaris IT Group SA Capital Group were summarized.

 

About Polaris IT Group SA:

Polaris IT Group SA operates as an IT service provider and consulting company in the field of information technology.

In July 2020 the Polaris IT Group SA Capital Group was established when Polaris IT Group SA acquired 100% of shares in the share capital of IAI (Industrial Artificial Intelligence Kft.), which holds 100% of shares in another Hungarian entity ISRV Zrt. At present, IAI does not conduct activities generating sales revenue, while ISRV is the company conducting operating activities on the largest scale within the Polaris IT Group SA Capital Group.

The main business lines of the Polaris IT Group SA Capital Group are artificial intelligence and the development of computer hardware and software.

Polaris IT Group SA is a company listed on the Warsaw NewConnect stock market operated by the Warsaw Stock Exchange as part of the Alternative Trading System.

Polaris IT Group focuses on the development of hardware and software based on AI
https://polarisitgroup.com/2023/04/21/polaris-it-group-focuses-on-the-development-of-hardware-and-software-based-on-ai/
Fri, 21 Apr 2023

Business profile

Polaris IT Group SA is a representative of the IT sector and a public company listed on the NewConnect alternative market (ticker: PIT), run by the Warsaw Stock Exchange.

The company is an IT service provider and consulting company in the field of information technology. It offers individually designed and innovative solutions in the field of security technology, artificial intelligence, biometric identification and image recognition and analysis, streaming and online education, as well as healthcare.

In July 2020, the Polaris IT Group SA Capital Group was established when Polaris IT Group SA acquired 100% of shares in the share capital of IAI (Industrial Artificial Intelligence Kft.), which holds 100% of shares in another Hungarian entity ISRV Zrt. At present, IAI does not conduct activities generating any sales revenue, while ISRV is the company conducting operating activities on the largest scale within the Polaris IT Group SA Capital Group.

The main business lines of the Polaris IT Group SA Capital Group are artificial intelligence and the development of computer hardware and software.

The majority shareholder of Polaris IT Group SA is Bit Pyrite Ltd, a British company with its registered office in London, controlled by the President of the Management Board, Gábor Kósa. It holds a majority stake of 66.24% of the shares, while the remaining 33.76% of the company’s shares constitute the free float.

IT services and technology market

It should be mentioned that the Polaris IT Group SA Capital Group has many years of business contacts in China and East Asia, where it purchases personalized notebooks, tablets, servers and sensor devices – most often directly from manufacturers. At the same time, its premium partner status with Asian entities allows it to provide its own customers with personalized and individually configured IT devices, equipped with pre-installed software and priority manufacturer support.

The above-mentioned purchasing opportunities and trade relations, together with its own development capabilities, should increase the Polaris IT Group SA Capital Group’s potential to generate revenues in the future, thus meeting visible expectations and market demand for IT services and technologies.

Financial results and valuation ratios

The Polaris IT Group SA Capital Group ended 2022 with revenues of nearly PLN 45 million and a net profit of approximately PLN 3 million. At the standalone level alone, Polaris IT Group SA generated sales revenues of approximately PLN 7 million at the end of last year, compared to almost PLN 4 million a year earlier, an increase of about 75% year on year.

As the Management Board of Polaris IT Group SA emphasizes in the latest periodic report, the company is constantly working on preparing large projects that started after 2022 and which, as the company hopes, will generate a significant increase in consolidated financial results already this year. This should be supported by, among others, contracts with the public sector for which the Hungarian subsidiary iSRV Zrt. is applying as part of its participation in consortia of companies.

The current stock market capitalization of Polaris IT Group SA is slightly over PLN 47.5 million, with valuation ratios that are relatively low and attractive compared to other entities in the IT industry: P/BV of 0.80 and P/E of 16.00.

Operational and strategic goals

At the operational level, the Polaris IT Group SA Capital Group is focused on continuing its current activities in the areas of software systems development, development of encryption solutions and hardware, research on artificial intelligence, trade in hardware and software, trade in hardware and software related to the healthcare market and the provision of services related to this hardware and software, as well as developing its own online learning solutions.

At the same time, the Polaris IT Group SA Capital Group is also actively looking towards new business areas, where it sees the opportunity to use its own skills in software development using technologies based on blockchain and AI solutions. One of them is logistics, where it intends to develop IT and logistics algorithms for freight transport. The second is the energy sector, where it plans to contribute to the implementation of effective control of high-performance energy storage devices on an industrial scale.

The strategic goal of Polaris IT Group SA remains to satisfy every unique need in the IT area using innovative solutions based on artificial intelligence and comprehensive development of hardware and software. To reach this goal, the company is constantly expanding its activities with projects based on its own products and services, manufactured by the Hungarian subsidiary iSRV Zrt.

In the longer term, one of the key strategic visions is expanding the Polaris IT Group SA Capital Group with further entities from the industry that will complement its business profile and strengthen its market position, especially those offering specialist competences and knowledge or new business development opportunities.

The posted content is for informational and educational purposes only and is always an expression of the personal views of its author. It does not constitute, either in whole or in part, a “recommendation” within the meaning of the Polish Regulation of the Minister of Finance of October 19, 2005, on information constituting recommendations regarding financial instruments or their issuers (Journal of Laws of 2005 No. 206, item 1715). The author is not responsible for any investment decisions made based on the published content.

Elderly care 2.0: empowering services by data science
https://polarisitgroup.com/2023/04/03/elderly-care-2-0-empowering-services-by-data-science/
Mon, 03 Apr 2023

In the previous posts, we talked about why elderly care is becoming a top priority in every society and how AI and data science can be leveraged to help create solutions to many challenges. To provide personalized care for individuals, remote monitoring is necessary, but naive use of such solutions can cause more harm than good. Federated Learning is one way to address this challenge. This post is about how monitoring can be used together with other information sources for better support at the individual and community levels.

To start with, let us admit we are nerds: we all love stats (we hope you do, too!). And in health care, there are tons of stats! That is great, but statistics are useful only when the context is given. Context is defined in space and time, so we need measurements across regions that can periodically update our knowledge (models) of whatever we care for. It was not so long ago that everyone was watching COVID-19 infographics and statistics that gave us up-to-the-minute information across the globe (like https://www.worldometers.info/coronavirus/ or https://ourworldindata.org/coronavirus or https://coronavirus.jhu.edu/map.html). It became obvious to everyone that the proper use of modern digital tools (hardware, communication, software) can be vital. In emergency situations, we throw in everything we have to get answers and make quick decisions. With dense observations and the right modeling, it is possible to see who is at greater risk, where medical aid should be deployed first, how to reorganize logistics, and so on.

Thankfully, emergencies are not permanent. But we did learn a lot about efficient information gathering and modeling at a social scale! And this fresh knowledge is exactly what is needed when we think about supporting large communities: be it kids in remote villages, elderly people living at home, or the general population in large areas without proper infrastructure. These scenarios are quite different, yet similar tools and services can support them. In turn, what we learn in one field may come in handy in another!

Here is a very interesting report on how digital health tools could empower the existing health systems in different regions of Africa [1]. While regions vary regarding infrastructure, level of financial support, or the extent of their health care systems, they all face some common challenges, like vast areas without safe transportation options and cultural and language differences. According to the McKinsey report, “digital health tools are technology-enabled products and services for patients, healthcare workers, communities, pharma, and biotech companies, public-health leaders, regulators, and payers.” They divided the various data-driven and AI solutions into six categories:

  • virtual interactions: remote consultations, emergency handling, mental support
  • paperless data: health information exchange, cloud-based prescribing
  • patient self-care: services that require the active participation of the clients
  • patient self-service: appointments, etc
  • decision intelligence systems: statistical modeling, decision support, etc
  • workflow automation: logistics and resource optimization, device management 

According to McKinsey’s estimates, the efficiency gains or savings from introducing various digital health tools could amount to somewhere between 2 billion and 11 billion dollars in South Africa alone by 2030 (6 to 15 percent of total projected healthcare spending)! The individual contributions of the most relevant tools and services are shown in the next table:

[Table: Share of total savings from digital adoption in South Africa, 2030, % – by digital health tool, under conservative and optimistic scenarios. Source: [1]]

Clearly, quite a few tools and services are a crucial part of the healthcare system or related to activities in medical centers (like genetic testing, performance dashboards or hospital logistics). But many are, or could be, relevant to elderly care, too! While the infrastructure (public transportation, road systems, communication networks, etc.) is significantly better in Poland or Hungary, in reality, access to this infrastructure is quite limited for the elderly. So there is a very strong similarity between the two scenarios. Perhaps it seems like a big leap in logic, but we can assume the relative contribution of the listed services and tools would be roughly similar when properly used in elderly care. Now let us dig a bit deeper and pick the ones with the largest potential.

Virtual interactions

Well, it is no surprise that virtual interactions between patients and caregivers can significantly increase efficiency, especially when resources are scarce or when mobility is limited. That is why we have a deep interest in large-scale monitoring and virtual assisting solutions (see our previous posts!). Even under the more conservative scenario, these services would give about 39% of the overall efficiency gain. 

Information exchange (“Paperless data”)

What is surprising, though, is that the least complicated of all the listed tools may have the largest impact! Basically, it is about having fully digitized electronic health (or status) records that can be searched, compared against, or integrated with other information sources. Again, under the more conservative scenario, this group of services would yield almost 24% of the overall efficiency gain. 

Considering that creating and maintaining large datasets is far cheaper than any of the other methods in the table, this contribution is huge! It is actually so huge that everyone should start jumping now! Well, maybe not. While technically the integration of various data sources can be done, there are many obstacles that make this task a real challenge.

Probably the single most important issue is of a legal nature. Patient records, monitoring data or any other personal information collected fall under a “special category” as defined by the European Union’s General Data Protection Regulation (GDPR) or “sensitive data” as defined by the Chinese Personal Information Protection Law (PIPL). In the USA, California’s Consumer Privacy Act (CCPA) has become a de facto benchmark. On the surface these laws are similar, but there are significant differences that make data-driven healthcare and elderly care complicated. Of the many differences, let us just highlight two outstanding features. Compared to PIPL [3], GDPR gives a very detailed list of data types and use cases regarding the “special category”, but its scope is narrower. Compared to CCPA [2], GDPR requires explicit consent, which can make the entire information-gathering process impossible in some cases.

Clearly, some monitoring data cannot be anonymized, as we need to know exactly what the patient is in need of and where she needs help. Differential privacy, however, makes it possible to separate out the general information that can be safely anonymized and is no longer considered sensitive [4].
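As a rough illustration (not part of the cited study), here is a minimal Python sketch of the Laplace mechanism, one common way to release aggregate statistics with differential privacy; the weekly fall count and the privacy budget are purely hypothetical values.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to the privacy budget epsilon.
    Adding or removing one person changes a count by at most 1 (the sensitivity),
    so noise drawn from Laplace(sensitivity / epsilon) gives epsilon-differential privacy."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: publish how many monitored persons reported a fall this week,
# without revealing whether any particular individual is behind the statistic.
weekly_falls = 42  # hypothetical aggregate from monitoring data
print(laplace_count(weekly_falls, epsilon=0.5))
```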

Beyond Paperless data

Of course, all these numbers in the quoted study are just rough estimates, as there are so many unknowns. But they do indicate how important it is to invest in the future and start the transformation. And clearly, once these tools are up and running, their joint impact becomes ever larger.

But why stop here? The listed services and tools are directly related to healthcare data or services. Once a secure link to the digital world is established, several new opportunities will arise by incorporating additional information sources. 

Here we list some ideas on how additional information sources could be used, but we believe that the possibilities are endless. 

  • Dynamic comprehensive geriatric assessment (CGA)
  • Weather forecast and pollen maps
  • Digital elevation maps (DEM)
  • Contact-distance maps

Dynamic comprehensive geriatric assessment (CGA)

According to [10], CGA “is defined as a multidisciplinary diagnostic and treatment process that identifies medical, psychosocial, and functional limitations of a frail older person in order to develop a coordinated plan to maximize overall health with aging”.

To put it simply, it is about understanding the individual’s needs and limitations and defining the best strategy for maintaining her well-being [5,6]. However, needs and limitations can change quickly due to deterioration of health status or unexpected changes in the surrounding infrastructure. What if the nearest grocery has gone bankrupt? What if the nearest bus line is closed due to construction work? This kind of information is dynamic by nature, and its use and integration with health status data are not straightforward. Yet it would make elderly care more proactive and efficient!

Naturally, social care systems have resource limitations so they must consider the needs of all the persons who need support. Similar to primary health care accessibility analysis (like [8] ), geographical modeling of the accessibility of care centers or other services by the elderly community could also be extremely informative. Maps like the following could be dynamically created and updated for each type of service (health care, pharmacies, community centers, parks, etc):

Access by public transport vs. walk time difference for two large cities in Finland. Source: [8]

By periodically updating these accessibility maps, we could spot regions that suddenly become less accessible indicating that proper measures must be taken. 
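To make the idea concrete, here is a minimal, hypothetical sketch of such an update check: two travel-time grids (minutes to the nearest care centre) are compared, and cells whose accessibility worsened beyond a threshold are flagged. The numbers and the 15-minute threshold are illustrative assumptions, not values from the cited study.

```python
import numpy as np

# Hypothetical travel-time grids (minutes) to the nearest care centre,
# computed for the same city at two points in time.
transit_before = np.array([[12, 18, 25],
                           [30, 22, 15],
                           [40, 35, 20]])
transit_after = np.array([[12, 18, 45],
                          [30, 22, 15],
                          [40, 55, 20]])

# Grid cells where access by public transport worsened by more than 15 minutes
worsened = (transit_after - transit_before) > 15
print(np.argwhere(worsened))  # cells that may need extra support or outreach
```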

Weather forecast and pollen maps

In addition to changes in the infrastructure, natural changes like weather can also cause severe problems for many. Extreme temperatures are unfortunately becoming more frequent, and there are several early warning systems in place. But abrupt changes like strong wind or rainfall are hard to predict. Their impact, however, depends on the density of older persons in a given region, so proper responses can be modeled and defined in advance.

Speaking of wind: air pollution (smog and pollen) hits elderly people with deteriorated health conditions harder. Services like https://www.breezometer.com/air-quality-map/ give detailed information about air quality for more than 60 countries. Many countries have public services showing the current situation, like the national pollen map in the US: https://www.pollen.com/map

Wind forecasts and pollen maps combined could also be used to inform people about what to expect and how to minimize risk. What is needed here? A health information exchange system, monitoring data about the current and expected location of the persons, and dynamic weather maps.

Digital elevation maps (DEM)

When monitoring at scale, we can quickly collect statistics that give us an overall view of the current status of the community. However, geographic information can add a lot to our understanding. For example, by monitoring walking speed and time spent, we may spot someone who moves significantly less than the average. But what if that person lives in a mountain village where moving requires more energy? This information can be taken into account by using elevation maps [11].
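As a toy illustration of this idea, the sketch below adjusts walking distance by elevation gain using a purely assumed "uphill factor"; a real deployment would need a validated energy-cost model.

```python
def effort_adjusted_distance(distance_m: float, elevation_gain_m: float,
                             uphill_factor: float = 7.0) -> float:
    """Rough walking-effort estimate: each metre climbed counts as if it were
    several metres on flat ground (uphill_factor is an illustrative assumption)."""
    return distance_m + uphill_factor * elevation_gain_m

# An 800 m walk in a hilly village with 60 m of climb ...
hilly = effort_adjusted_distance(800, 60)
# ... compared with a 1200 m walk on flat terrain.
flat = effort_adjusted_distance(1200, 0)
print(hilly, flat)  # the shorter hilly walk may actually demand more effort
```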

Contact-distance maps

We have already talked about how elderly care will become our common responsibility. Families, together with community resources, can give the right support. Whenever possible, social workers collect information about primary contacts. Having family members around can make services much more efficient and provide better emotional support. But it is also important to know how far away the primary contacts live. Contact-distance maps could help when deciding between asking family for help and providing social care support in case of an emergency or, for instance, a planned visit to the doctor.

We believe that this list will grow in the future. What is needed is your imagination and our technical expertise to make all of this possible.

References

[1] https://www.mckinsey.com/industries/healthcare/our-insights/how-digital-tools-could-boost-efficiency-in-african-health-systems?stcr=4B30665F36D948B8A3BDCB408E61012B&cid=other-eml-alt-mip-mck&hlkid=7215bd7fe4644bb992acfb6c78fee308&hctky=12152474&hdpid=aee9a458-cc27-4df7-8598-208978eee1cd#/ 

[2] https://www.cookieyes.com/blog/ccpa-vs-gdpr/ 

[3] https://www.china-briefing.com/news/pipl-vs-gdpr-key-differences-and-implications-for-compliance-in-china/ 

[4] https://www.gdprsummary.com/anonymization-and-gdpr/ 

[5] https://www.communityservices.act.gov.au/domestic-and-family-violence-support/what-is-act-government-doing/dfv-risk-assessment/key-components/risk-assessment 

[6] https://core.ac.uk/download/pdf/236433397.pdf

[7] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6068710/ 

[8] https://www.sciencedirect.com/science/article/pii/S0143622821001995 

[9] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6852049/ 

[10] https://www.uptodate.com/contents/comprehensive-geriatric-assessment 

[11] https://omdena.com/blog/topographical-maps-deep-learning/

Do not let Big Bro in! – Security and privacy in elderly care
https://polarisitgroup.com/2023/03/10/do-not-let-big-bro-in-security-and-privacy-in-elderly-care/
Fri, 10 Mar 2023

In our previous post we talked about how elderly care is becoming one of the most fundamental challenges across the world. It is clear that we will need all the tricks IT can offer, be it cloud computing, edge devices or AI. However, these shiny new technologies are not without serious risks regarding privacy, material loss or even immediate danger to life. Where elderly care is concerned, these risks are even more pronounced.

Picture this. You think your grandma is fine, as her well-being is monitored indoors (using cameras, lidars, etc.) and outdoors (via wearable devices, etc.). But what if someone can tap into the data communication and see when the apartment is empty? Or, by stealing the biometric data, find it a piece of cake to steal money or commit fraud? What is even worse, what if someone can fiddle with the smart pacemaker or the insulin pump remotely? Well, actually, it has already happened (shorturl.at/bkrX2).

While all these hybrid (physical and cyber security) issues would be worth a separate post of their own, we now want to introduce you to another aspect of privacy concerns: learning from highly sensitive data.

As we have already discussed how important data is for learning complex patterns of the world, it is no surprise that health care monitoring or modeling behavioural patterns needs lots of patient data. Those data can be as simple as the number of doctor-patient contacts a month or as complex as heart rate variation on a second-by-second basis. The problem is that we need to make sure that no personal information (“metadata”) gets mingled with the data needed to train the AI models. Why is that? Well, making such sensitive data open can pose a direct threat to the participants. What is more, there is an indirect risk that can hurt even those who are not providing data to the training process but are somehow related to the patients.

For health monitoring, the problem is not limited to the model training phase. Continuous monitoring of the participants requires maintaining contact and repeated access to sensitive data. This data is then used to provide predictions as well as useful information to update (fine-tune) the learning models (a continual learning scenario). So how can we secure the flow of sensitive data? And how can we make sure that personal information does not get into the wrong hands?

There are existing solutions that either try to hide or erase sensitive information (various kinds of anonymization) or try to deeply encrypt the communication channel. 

However, there is another smart idea that is designed to render the communication of sensitive info unnecessary. This approach is called federated learning. Let us see what this is all about. 

Federated Learning (FL)

According to Wikipedia: “Federated learning is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging them. Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus allowing to address critical issues such as data privacy, data security, data access rights and access to heterogeneous data. Its applications are spread over a number of industries including defense, telecommunications, IoT, and pharmaceutics.”

The term was coined by Google (https://arxiv.org/abs/1602.05629v3) back in 2016.

Source: https://blog.nimbleedge.ai/federated-learning-101/

Let us dive into this complex definition. The first interesting technology involved is called distributed learning. To brush up our knowledge, let us talk a bit about machine learning, in particular supervised learning. Here the task is to learn to associate labels with data. Machine learning algorithms learn the association by incrementally tuning the parameters that define the chain of transformations making up the algorithm. Nowadays we are talking about millions or billions of those parameters! That explains why training is so tedious in most cases. However, if several machines can work in parallel on different batches of data, training becomes much faster, provided the trained model variations are properly combined into one single solution. The other thing that pops up is that FL is ideal when privacy preservation is of central importance. The whole idea is about minimizing data exchange between the clients (units that can train a model on local data) and the server (a unit that aggregates local model updates and organizes parameter exchange, but does not have access to the data). This particular issue is getting so important that it makes FL a central part of AI applications across various industries and businesses: Google, IBM, Intel, Baidu and Nvidia have all come up with their enterprise-grade FL frameworks (shorturl.at/dgiy7)!

The original idea was based on the assumption that edge devices (like smartphones) can both collect and process data locally. In turn, if models can fit into the phone’s memory, then it is enough to exchange local updates with a central model. This concept is called cross-device FL. Personalized text prediction like Gboard uses this approach.
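To make the aggregation step concrete, here is a minimal sketch of the federated averaging ("FedAvg") idea from the Google paper cited above: clients send only parameter updates, and the server combines them weighted by local dataset size. The numbers are made up purely for illustration.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One aggregation round: average the clients' parameter vectors,
    weighting each client by the number of local samples it trained on."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                    # shape: (clients, params)
    coeffs = np.array(client_sizes, dtype=float) / total  # data-proportional weights
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hypothetical phones send updated model parameters; no raw data leaves a device.
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
samples = [200, 500, 300]
print(federated_average(updates, samples))  # the new global model parameters
```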

Well, texting is fine on mobiles, but measuring blood sugar? So there is another real-life scenario. You have already shared your medical data with your doctor, as have all her other patients. In turn, the health care center can tune its own model using all the available data. Centers can exchange the model parameters without exposing their own patient data. This approach is called cross-silo FL.

See the picture below! Normally data are collected and aggregated across the different locations and a central unit trains a model using all the data collected. This setup definitely raises the red flag as sensitive medical records are moving around. But here comes cross-silo FL to the rescue! Privacy is preserved, well done!

Source: https://openfl.readthedocs.io/en/latest/overview.html

Clearly, cross-device and cross-silo FL types define the scaling dimension of FL. In the last few years many new ideas have been discussed and now there are at least 6 factors that are needed to differentiate between the various solutions.

Source: https://arxiv.org/pdf/1907.09693.pdf 

Data partitioning is about how participants and their features (data records) are treated across the different local models. While the original idea assumed that each client node has the very same representation of the participants, there are real-life scenarios where data gets partitioned by feature sets and not by user ID. As an example, a bank and an insurance company may have access to different data on the very same user, yet they can mutually improve each other’s models.

Machine Learning modeling is about the core model applied within FL. The more complex the model, the more update exchange is needed. As far as FL is concerned, the most important question is how to aggregate the local updates when facing reliability and communication bandwidth issues.

Privacy Mechanism is a core component of FL frameworks. The basic idea is to avoid information leakage amongst clients. Differential privacy (that is, separating user-specific and generally relevant information) and cryptography are two frequently used approaches, but this is a constantly evolving field.

Communication Architecture. The original idea suggested an orchestrated approach to model training where the central server holds the aggregated model that is mirrored in the local units. More recent solutions drop centrality and suggest various decentralized updating mechanisms. In these solutions client nodes communicate with a few peers and there is a particular policy on update propagation.

We talked about Scaling, and the last point is about the Motivating Factors for applying FL. In some cases, stringent regulations force us to turn to FL (consider GDPR in Europe, CCPA in the US or PIPL in China). In other cases shared cost and increased reliability could be the main driving forces. 

If you wonder why we have so many factors to check, just think about the immensely different challenges in e-commerce (personalized ads), finance (fraud detection) or healthcare (remote diagnostics, etc.; see https://www.nature.com/articles/s41746-020-00323-1). Different requirements require different solutions.

So what are the main challenges that FL solutions meet?

Communication efficiency

Updating large models requires sending large messages. Another problem is limited bandwidth: when a large number of clients try to send data, many will fail. The solution to the first problem involves some form of compression, while the second is addressed by the introduction of decentralized (peer-to-peer and gossip) networks, where updates are exchanged locally. One example solution is depicted in the next figure:

Source: https://arxiv.org/pdf/1905.06731.pdf 
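On the compression side, a minimal sketch of one popular trick, top-k sparsification of a model update, might look like the following (the sizes are arbitrary illustrative values):

```python
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a model update;
    send their indices and values instead of the full dense vector."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

update = np.random.randn(1_000_000)           # a large dense update
idx, vals = top_k_sparsify(update, k=10_000)  # roughly 1% of the original payload
```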

Privacy and data protection

While raw data stays where it was generated, model updates can be attacked and reveal private information. Some solutions are built around differential privacy, where only statistical (general) data are extracted and used for model training (https://privacytools.seas.harvard.edu/differential-privacy). Another interesting idea is to perform computation on encrypted data only (“homomorphic encryption” for those who like scientific terms). Yet another idea goes in the opposite direction: spread the sensitive data across many data owners so that computations can only be done in a collaborative fashion. Cool, isn’t it?
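The "spread the sensitive data across many owners" idea can be illustrated with additive secret sharing, a building block of secure aggregation. The sketch below is a simplified toy version; a real protocol would add authentication, dropout handling, and more.

```python
import secrets

PRIME = 2_147_483_647  # a public modulus; arithmetic on shares is done mod PRIME

def share(value: int, n_parties: int):
    """Additive secret sharing: each share alone is a uniformly random number;
    only the sum of all shares (mod PRIME) reconstructs the original value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

parts = share(120, n_parties=3)   # e.g. a blood-sugar reading split across 3 servers
assert reconstruct(parts) == 120  # no single server ever sees the raw value
```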

Diverse hardware 

For really large FL systems, nodes are most likely quite different in terms of storage capacity, computing power, and communication bandwidth. And only a handful of them participates in the update at a given time, resulting in biased training. Solution? Asynchronous communication, sampling of active devices and increased fault tolerance. 

Messy data

Clients may get different data in terms of quality (noisy, missing info, etc.) and statistical properties (differences in distribution). That is a big one, and it is not easy to fix or even to detect. What is even worse, nodes with their local models can be compromised to enable a “model poisoning” attack (https://proceedings.mlr.press/v108/bagdasaryan20a.html): specially crafted data and local model updates drag the aggregated model toward an unwanted state, causing erratic behavior and damage.
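Defending against such attacks is an active research area; as a purely illustrative sketch (not a complete defense), two simple mitigations are clipping each update's norm and replacing the plain average with a coordinate-wise median:

```python
import numpy as np

def robust_aggregate(client_updates, clip_norm=1.0):
    """Clip each client's update to a maximum norm, then combine the updates
    with a coordinate-wise median instead of a plain mean."""
    clipped = []
    for u in client_updates:
        scale = min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
        clipped.append(u * scale)
    return np.median(np.stack(clipped), axis=0)

# The third update looks suspiciously large - a possible poisoning attempt.
updates = [np.array([0.1, -0.2]), np.array([0.2, -0.1]), np.array([50.0, 40.0])]
print(robust_aggregate(updates))  # the outlier's influence is bounded
```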

If you have read this far, you must share our enthusiasm for FL. If you are willing to get your hands dirty, there are several open-source FL frameworks to play with (OpenFL, whose documentation is linked above, is one example).

If you have any questions, have interesting ideas, or just want to talk about FL, just drop us a mail!

Stay strong or how AI can help in elderly care
https://polarisitgroup.com/2023/02/23/stay-strong-or-how-ai-can-help-in-elderly-care/
Thu, 23 Feb 2023

In this and two upcoming posts we are going to talk about elderly care. 

What does it have to do with next-generation technology? Well, as aging is one of the most pressing issues affecting all societies globally, the power of data-driven AI methods will very much be needed.

In this post our goal is to show the complexity and importance of elderly care, while the next post is about some of the application areas where modern Data Science and Artificial Intelligence can be the secret weapon. In the last post we get a bit more technical and talk about some of the smart ideas and technologies that will allow for the efficient use of AI in the future.

One of the challenges facing the modern world is the problem of the aging population. And it is a misconception that it only concerns developed countries. It is a problem faced by most countries in the world.

The elderly population is defined as people aged 65 and over. [1]

The following graph is a projection of the aging trends for various geographic regions:

Source: UN (2017) World Population Prospects: the 2017 Revision.

Interesting statistics on the aging population are provided by the WHO [2]. Let’s take a look at some selected predictions for the coming years:

  • Between the years 2015 and 2050, the proportion of the world’s population over 60 years will increase from 12% to 22% (The number of elderly people will almost double).
  • By 2020, the number of the elderly (aged 60 years and older) will be greater than  the number of children younger than 5 years.
  • In 2050, 80% of older people will be living in low- and middle-income countries.

So we all need to think about how to alleviate the burden on the healthcare system and our society due to the likely demographic shift.

To see the future now, let us just take a look at Japan, as it is home to the oldest society. According to [4], these are the not-too-bright projections for Japan:

  • 2020: Half of the female population will be more than 50 years old. (checked)
  • 2021: Many employees will leave their jobs because of nursing care for their family members. (checked)
  • 2024: 33% of the population will be more than 65 years old.
  • 2025: Population shrinking will start even in Tokyo.
  • 2026: More than 7 million people with dementia.
  • 2027: Blood for transfusion will be scarce.
  • 2030: Big department stores, banks, and retirement homes will close their branches in smaller cities.
  • 2035: 33% of the male population and 20% of the female population will live and die single.
  • 2039: Serious shortage of crematoria.
  • 2040: Half of provincial governments will disappear.

Not surprisingly, Japan is one of the pioneers regarding ICT and smart solutions for health and elderly care. As an example, due to cultural conflicts [5], they have been the first to use service robots in hospitals and other institutions instead of employing a foreign workforce.

Source: https://foreignpolicy.com/2017/03/01/japan-prefers-robot-bears-to-foreign-nurses/

Before talking about how AI will save the world – OK, it is a bit of an exaggeration – let us see what the most fundamental consequences are and what changes in infrastructure, policy and services can be foreseen.

The ever-growing share of those who might need extra help in various activities will shift the focus of policy making and social priorities. First, the elderly will remain valuable consumers, yet their needs differ from those of the still active population (changing infrastructure and business environment). Second, as the proportion of the active population decreases, personal care cannot be provided for everyone (shortage of skilled workforce). Third, services – in particular healthcare – will become more expensive as the proportion of active payers decreases.

The labor shortage is a general problem, and better process optimization and automation will mitigate the issue to some extent. For skilled labor, AI with learning capabilities can provide some leverage. However, elderly care has very specific requirements that present a real challenge for the strategists of tomorrow’s technology.

So let us see what the most frequent issues in elderly care are.

Source: https://www.ncoa.org/article/the-top-10-most-common-chronic-conditions-in-older-adults

Aging affects both our physical and mental well-being. According to the WHO report [2], the most common physical health conditions associated with aging include hearing loss, cataracts and refractive errors, back and neck pain, osteoarthritis, chronic obstructive pulmonary disease, and diabetes. The most prominent mental or psychological issues are loneliness, exclusion, depression and dementia. These conditions often interact and worsen each other. A feeling of exclusion resulting from a low level of physical activity will deepen depression. The positive side is that if we can compensate for a particular loss (like helping with locomotion), we can actually induce improvement in many other conditions.

However,  due to many unknown factors (like climate change or breakthroughs in medicine), some of these conditions may become less important, while new ones emerge. 

Now let us see what specific requirements must be met if we want to successfully create new AI based technologies:

  • Probably the most important requirement is to make humane solutions. Human caregivers should provide more psychological support (communication, empathy, availability/presence), while technology should be used for the rest (diagnosis, monitoring, etc, [3]) -> Human-centered solutions 
  • The majority of the people over 65 may have more than one of the conditions listed above. So general solutions just fail. -> Highly flexible, customizable solutions
  • Next-generation technologies are increasingly difficult to adopt, not only for the subjects but also for the caregivers -> User acceptance, special user support
  • Centralized health care institutes have limited capacity or are not available to everyone -> decentralized, distributed, localized solutions
  • If hardware is involved, maintenance can also be an issue -> low cost, yet robust and reliable solutions are needed
  • People as well as health conditions are going to evolve -> highly adaptive and evolving solutions are needed 

Clearly, AI is much needed when it comes to adaptive solutions, flexibility and learning. Also, we are talking about large-scale challenges where we need to understand the general as well as the unique factors that define each and every case. And here comes Data Science to our rescue!

You can read about some of our exciting projects in the following posts.

 

References

[1] ‘OECD Data – Demography – Elderly population’. Accessed: Jun. 28, 2022. [Online]. Available: http://data.oecd.org/pop/elderly-population.htm

[2] ‘WHO Fact Sheet – Aging and Health’. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/ageing-and-health

[3] ‘[EC] Market study on telemedicine’. 2018. [Online]. Available: https://health.ec.europa.eu/system/files/2019-08/2018_provision_marketstudy_telemedicine_en_0.pdf

[4]  Kawai, M. , Mirai no Nenpyo (=Future Chronologic Table), Tokyo: Kodansha. 2017

[5] https://foreignpolicy.com/2017/03/01/japan-prefers-robot-bears-to-foreign-nurses/

All you need is AI, AGI, XAI!
https://polarisitgroup.com/2023/02/09/all-you-need-is-ai-agi-xai/
Thu, 09 Feb 2023

At ISRV plc we are on the lookout for powerful IT tools to always meet the highest expectations. That is why we are deeply engaged with AI (Artificial Intelligence). We actively use and improve AI-based solutions in various fields, spanning from security solutions to traffic (in particular, airport runways), analysis of behavioral patterns, and the improvement of industrial processes.

AI is definitely the magic word of our age so it is only natural that it is the central topic of our first blog! 

There are tons of AI-related news items and amazing success stories, yet there is still much confusion about the very meaning of the terms used throughout. It is also alarming that failures are usually not reported openly, nor is it clear how success is measured.

So we will not talk about the everyday miracles here. Instead, we focus on how to bridge the gap between business and technology. While we do believe that AI is going to stay with us, we also believe in the importance of clear communication with our clients and a critical approach to technology.

The momentum of AI is clearly mirrored by the sheer amount of money spent on funding AI-based startups:

 

Source: https://www.cbinsights.com/research/report/ai-trends-2021/ 

 

Is it just a prelude for the next dot-com bubble, or is there indeed potential in this new technology? 

The very existence of our applied AI department shows our commitment to the thinking and methodology behind the umbrella term AI. This blog post highlights one factor that makes us think that AI will stay with us for a long time.

 First, let us clarify the confusion surrounding terms like AI and Machine Learning. 

What makes a system ‘AI’? For such a popular term, it is quite surprising that there is no scientific consensus about its meaning. To be able to define this term, we first need to define what makes a natural system intelligent. Answering this seemingly simple question is the holy grail of a great many scientists in cognitive philosophy, computer science, robotics, and the developmental sciences. Instead of diving deep into this exciting question, I’d rather focus on some aspects of intelligence that fill us with awe when we think about our own mental faculties.

By remembering the past, we can recognize and solve challenges similar to what we have seen before. By decomposing big problems into smaller puzzles we learn how to make plans and strategies that will be useful for future problems. 

The combination of remembering, learning, and prediction is the foundation of generalization skills, something that machines are still lacking. So no worries, the era of artificial general intelligence (AGI) hasn’t yet arrived. 

We believe, in most cases, the term ‘AI’ is misused as it refers to this non-existing AGI. Instead, AI should only be used for systems designed for solving one particular problem (“narrow AI”). In this sense, AI solutions may be seen as imitations of one of our cognitive processes, like scene or text understanding. 

In practice, most AI solutions can be seen as a question-answering game: for a given input (image, text, time series) assign a good label (annotation, related info, prediction of the next incoming data, etc). Questions and answers may follow a pattern and when enough such question-answer pairs are presented, the AI system can learn to recognize the pattern. The power of AI is the ability to find and learn these patterns without human interaction or predefined models or concepts. The learned patterns then guide the system to guess the missing parts when just one piece of the pattern (the question) is presented. 
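A tiny, generic example of this question-answering view (a textbook exercise, not one of our production systems): a standard scikit-learn classifier learns the pattern between small digit images (the questions) and their labels (the answers).

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Question": an 8x8 image of a handwritten digit. "Answer": its label, 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)  # learns the question-answer pattern
model.fit(X_train, y_train)
print(model.score(X_test, y_test))         # how often the guessed answer is right
```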

Learning is a key concept of intelligence and so nature equipped us with several means. Imitation learning, instruction-based learning, example learning, or curiosity-driven learning are all needed to help survival. 

Computer systems extract that hidden information about the question-answer relationship via one of these learning mechanisms, but there is no general theory about mixing these processes. 

I think this is one of the reasons why most scientists talk about machine learning and avoid the use of AI. 

While it is true that real artificial intelligence has yet to come, the general concepts and the mature tool sets of machine learning are extremely powerful. At Polaris and ISRV we apply similar ideas for quite different tasks, like object detection on traditional RGB or thermal images or instance segmentation on 3D point clouds.  

There are a large number of different machine learning models that vary in the applied learning mechanisms and in the way the problems are presented to the computer. In general, increased complexity yields increased accuracy (a higher likelihood of providing the right answer). But there is no free lunch: such models cannot be used to give insight into the interrelationships found in the data, nor can we see how the model reached its conclusion. In other words, explainability got lost somewhere. The next figure shows the relationship between a model’s average performance (accuracy) and its explainability.

Source: https://www.darpa.mil/program/explainable-artificial-intelligence

You may ask why it is important to understand the internal workings of these complex machines if the end results are so good.

Some say it is not important, but there is increased interest in academia and in industry, as many believe that treating AI solutions like a black box is actually dangerous and can lead to disasters. According to a 2019 PwC study [8], most CEOs interviewed believe that AIs must be explainable to be trusted.

If we cannot explain what is going on, how do we know if we can trust the system in the future? How do we know if it fails or when it fails? What can we gain from the learned patterns? 

Explainable AI (XAI) is all about these issues. As the use of AI becomes more pronounced, there are more and more areas where trustworthiness and accountability are of central concern. The source of the image above is DARPA (Defense Advanced Research Projects Agency), the organization responsible for US military research. No surprise that they are among the first large organizations dealing seriously with XAI: they know a bit about the risk of using opaque tools in decision making. Military operations are not the only field where decision-making can gain a lot by using proper AI. Critical infrastructures like spaceships or traffic control will all depend on AI one day. But there are other fields where we already witness smaller or larger hiccups caused by the blind use of AI: healthcare (think of personalized medicine), law, and insurance, to name a few.

In addition, explainability is tied to privacy (GDPR’s ‘right to explanation’), so it becomes mandatory to use interpretable AI solutions [5].

XAI is a set of tools that can be used to show how the given machine learning models make decisions. The aim is to provide insights into how models work, so human experts are able to understand the logic that goes into making a decision. 

Due to the diversity of applied AI regarding domains, data types, methods, and scope, there is no ‘one size fits all’ XAI solution. But thanks to the increased awareness of stakeholders across organizations, there are dozens of new methods that help transform opaque AI methods into XAI that is (more) transparent, confident, trustworthy, and interpretable. The following figure shows the number of published reports on XAI in the last couple of years:

Source of data: shorturl.at/knAR9

A recent review of XAI papers [7] published in high-ranked journals showed the following summary about the application of various XAI methods:

Source: https://arxiv.org/abs/2107.07045

As we see, there are many domain-specific solutions, but a large number of methods can be applied generally. The distribution of the underlying machine learning methods follows current trends, and neural network (deep learning) solutions take the lead. Traditionally it is a bit easier to provide a post-hoc analysis and to peek into the mechanism of the applied models. Nevertheless, new ideas help create solutions that are inherently explainable. The scope of explainability concerns analyzing the response to a single input or analyzing average behavior. Finally, an explanation should be human-centric: the way it is presented matters. So methods are characterized by yielding results in the form of visualizations, rules, text, or numbers.

Here are some examples of local methods, where the model’s output for a given instance can be interpreted. There are global and mixed methods as well, and most methods can also be applied to other modalities like text, audio, or multimodal signals.

A model-agnostic method called LIME (Local Interpretable Model-Agnostic Explanations) [9] creates a surrogate linear model that learns the relationship between the source model’s prediction and the features of the original input. For images, features can be pixels or pixel groups; for tabular data, features are the distinct columns. This method gained popularity because it can be applied on top of many different models and methods.
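A minimal usage sketch, assuming the open-source lime package and a scikit-learn model trained on a public dataset (an illustrative toy, not one of our production models), might look like this:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Fit a local linear surrogate around one instance and list the most influential features.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```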

In multi-class classification problems, models are often used to provide top-N potential labels for a given input. The next example analyzes the top 3 labels yielded by a neural network trained on a large image dataset:

The top three predicted classes are “tree frog,” “pool table,” and “balloon.” Sources: Marco Tulio Ribeiro, Pixabay (frog, billiards, hot air balloon).
https://www.oreilly.com/content/introduction-to-local-interpretable-model-agnostic-explanations-lime/ 

For tabular or structured data, where columns represent attributes or well-defined features, feature importance and simple feature interactions can be estimated and visualized using, for example, the SHAP method (“SHapley Additive exPlanations”) [10].

Kaggle hosts a FIFA dataset that contains data about several matches. Several tutorials analyze the importance of the individual features when predicting whether a given team will have the ‘man of the match’ badge. There are attributes like Opponent Team, Ball possession %, Goals scored, etc.

The following example shows the contribution of each feature to the overall prediction of a model:

 

Source: https://www.kaggle.com/code/dansbecker/shap-values/tutorial 

 

Red features increase and blue features decrease the prediction relative to the average model output. When calculating feature importance, interactions between features are also accounted for.
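For readers who want to try this themselves, here is a minimal sketch assuming the open-source shap package and a scikit-learn regression model on a public dataset (again an illustrative toy, not our production setup):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)  # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X.iloc[:1])

# Per-feature contributions for one prediction: positive values push the prediction
# above the model's average output, negative values pull it below.
print(dict(zip(X.columns, shap_values[0])))
print("baseline (average model output):", explainer.expected_value)
```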

While these are toy examples, explainability is of critical importance in our line of work. For example, we work on a multi-purpose, autonomous remote sensing device that can be used to detect surface anomalies or foreign objects. To make it robust, multiple sensors are integrated. Think about detecting loose parts in a highly automated factory plant or debris on airport runways. The challenge is that we cannot define beforehand what we should find, as an anomaly is just a deviation from expectations. In turn, the system may miss something important or raise a false alert. For security and insurance purposes, we log the events so we can troubleshoot the system or explain its findings. Our solutions are thus transparent and trustworthy by design.

We believe this summary can help build trust by spreading the word that transparent AI is the way to go, and that, when used properly, AI-based tools can help create innovative and efficient solutions to old and new problems alike.

We hope we can help your business grow with our AI experience and passion!

In the following posts we present the areas and projects where we are actively engaged in research and development.

References:

[1] https://towardsdatascience.com/5-significant-reasons-why-explainable-ai-is-an-existential-need-for-humanity-abe57ced4541

[2] Gunning, D., Vorm, E., Wang, J.Y. and Turek, M. (2021), DARPA’s explainable AI (XAI) program: A retrospective. Applied AI Letters, 2: e61. https://doi.org/10.1002/ail2.61

[3] http://www-sop.inria.fr/members/Freddy.Lecue/presentation/aaai_2021_xai_tutorial.pdf

[4] https://proceedings.neurips.cc//paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-Paper.pdf

[5] Recital 71, EU GDPR. https://www.privacy-regulation.eu/en/r71.htm, 2018. Online; accessed 27-May-2020.

[6] https://proceedings.neurips.cc//paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-Paper.pdf

[7] Gohel, Prashant et al. “Explainable AI: current status and future directions.” ArXiv abs/2107.07045 (2021).

[8] https://www.computerweekly.com/news/252462403/Bosses-want-to-see-explainable-AI 

[9] Ribeiro MT, Singh S, Guestrin C (2016) “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135–1144

[10] Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 4768–4777.

The post All you need is <span style="text-decoration: line-through">AI, AGI,</span> XAI! appeared first on Polaris website.

]]>
AI to help green asset management: Monitoring urban trees https://polarisitgroup.com/2023/01/30/ai-to-help-green-asset-management-monitoring-urban-trees/ Mon, 30 Jan 2023 20:21:22 +0000 https://polarisitgroup.com/?p=5789 Nowadays, there is no IT news without mentioning AI related technologies. As IT has become central part of our entire economy, AI follows a similar trend and is going to be used where digital data are generated. That is, everywhere.  Every process of business – be it manufacturing, marketing, HR, finance or logistics- is being […]

The post AI to help green asset management: Monitoring urban trees appeared first on Polaris website.

]]>
Nowadays, there is no IT news without a mention of AI-related technologies. As IT has become a central part of our entire economy, AI follows a similar trend and is going to be used wherever digital data are generated. That is, everywhere. Every business process – be it manufacturing, marketing, HR, finance or logistics – is being transformed by AI. How do we know that AI is not just another hype and that data-driven thinking is here to stay? AI is no longer a pet project in a university lab: the technology is maturing, and new discoveries get into production at an unprecedented pace. With the proper use of AI technology, complex systems become more transparent and explainable, and control, or at least forecasting, becomes easier.

To make it all happen, we need new perspectives, new organizational models, new IT solutions.

However, while many applications of next-gen AI technology are truly amazing, the role of AI in most use cases is less articulated or hard to explain.

As we understand business logic, we can help identify, analyze and create optimal solutions for our business partners via clean and transparent methods. The magic is in human creativity, not in "AI black boxes" anymore. In this post we are excited to showcase one particular application where AI has a significant impact on the quality of life of an entire population.


The task: green asset management in Singapore

Source: https://cutt.ly/vkW439J

Singapore – or the Garden City – has earned the badge of the greenest city on Earth, with about 1–2 million planted trees. However, the maintenance of the complex system of parks, forests and streets requires an enormous amount of manpower, resources and skills. Climate change makes the task even more difficult, as local weather conditions have changed significantly in the last few years, shortening the life expectancy of the trees and helping the spread of various diseases (fungi, bacteria, insects). Since the quality of life of the citizens depends a lot on the well-being of the vegetation, proper responses must be taken to preserve and improve the green areas.

What is needed, then? NParks – responsible for all public green areas within Singapore – put together an action plan (https://cutt.ly/zkKamWm). First, an automated monitoring system was developed that not only helps assess the present state of the environment but is also the foundation of the environmental modeling used to direct future plantations and support risk analysis (such as identifying trees that are likely to fall). As a second step, they need a computer-aided system that can automate parts of the data analysis so that manpower can be better utilized. Finally, they will need a system that can create and fine-tune weather and environmental models, taking into account all kinds of data to be collected in the future regarding the physical conditions of the city.

 

Automation of the data collection process is already a huge step, as one expert can examine only a few dozen trees per day during field work. Data collection now takes place as cars cruise the city, collecting high-resolution LIDAR (Light Detection And Ranging) scans and panoramic images that capture the surroundings of the car.

 

Images taken at various angles of the same object can be used to identify the relevant objects and pinpoint their 3D positions. The corresponding LIDAR 3D point cloud can then be used to reconstruct the detailed 3D morphology of the identified tree: tree height, diameter and angle of the trunk, position of the first branch bifurcation, volume of the green canopy, etc.
Source: https://cutt.ly/0kW4OMU 
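To give a feel for what deriving morphology can mean in code, here is a deliberately simplified, hypothetical sketch of estimating tree height and trunk diameter from an already segmented point cloud. The random array stands in for real LIDAR data; the actual pipeline is considerably more sophisticated:

```python
# Hedged, simplified sketch of deriving basic tree parameters from a segmented
# point cloud. The array below is a random placeholder for an actual tree.
import numpy as np

points = np.random.rand(10_000, 3) * [4.0, 4.0, 12.0]  # x, y, z in metres (placeholder)
is_trunk = points[:, 2] < 2.0                           # pretend the lowest points are trunk

tree_height = points[:, 2].max() - points[:, 2].min()

# Approximate trunk diameter at breast height (~1.3 m) from the spread of
# trunk points in a thin horizontal slice around that height.
slice_mask = is_trunk & (np.abs(points[:, 2] - 1.3) < 0.1)
trunk_slice = points[slice_mask, :2]
center = trunk_slice.mean(axis=0)
diameter = 2 * np.linalg.norm(trunk_slice - center, axis=1).mean()

print(f"height ~ {tree_height:.1f} m, trunk diameter ~ {diameter:.2f} m")
```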

 

In the beginning there was an attempt to manually reconstruct the trees from the 3D data points using those methods that surveyors still use. However, this approach was painstakingly slow and imprecise.

Ideally, the 3D point cloud obtained via LIDAR measurements can be segmented into meaningful parts like canopy, branches and trunk. Fine grained modeling of these parts can help estimate the entire morphology of the tree. Source: https://cutt.ly/AkW4VeT

Just thinking about the sheer amount of data makes one dizzy. That is why data processing must be automated as well. And here comes the challenge! How can we transfer the experts’ domain knowledge into something that a computer can use? 

The low reliability of the aforementioned traditional approach stems from the fact that it was built on a set of complex and rigid rules. So our main task was to come up with a solution that can handle highly complex examples, localize the trees, and provide a geometrical model of each tree from which all the required parameters can be derived efficiently (quickly and precisely).

 

Solution in theory

We can see the widespread use of 3D models in various fields, from infrastructure maintenance to healthcare (engineering is basically all about 3D modeling nowadays). Doctors can actively use 3D models previously created from imaging data during surgery, for example. Or think about self-driving cars: when moving, the system must quickly recognize all relevant static and moving objects. 2D images, however, are often hard to interpret or too noisy to be helpful (low visibility, clutter, etc.). On top of the physical limitations of the sensors, the amazing variety of the objects makes the task even harder. That is why rule-based solutions simply fail. So here comes Artificial Intelligence (or, more accurately, Machine Learning (ML)) to our rescue! ML is an umbrella term for various methods that work quite well in real-life scenarios.

The key aspect of these various methods is that they can extract and learn patterns or relationships between data (observations) and their annotations (tags given by human experts). 

Learning is about tuning the parameters of an algorithm so that it becomes better at assigning tags to data. In our case, the data are 3D points and the annotations are labels like 'tree trunk' or 'canopy'. By assigning a tag to each individual point, we actually perform semantic segmentation: points are binned into meaningful groups or clusters. When similar objects need to be detected and separated, an additional label is used as an ID: points labeled with the same ID belong to the same object. This task is usually referred to as instance segmentation.
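The following toy sketch illustrates the difference between the two label types on a point cloud; the class IDs and points are made up purely for illustration:

```python
# Illustrative sketch: semantic vs. instance segmentation labels for a point cloud.
import numpy as np

points = np.random.rand(8, 3)                    # 8 points, xyz coordinates (placeholder)
semantic = np.array([0, 0, 1, 1, 2, 2, 1, 2])    # 0 = ground, 1 = trunk, 2 = canopy
instance = np.array([-1, -1, 7, 7, 7, 7, 8, 8])  # tree IDs; -1 = no object (ground)

# Semantic segmentation: every canopy point, regardless of which tree it belongs to.
canopy_points = points[semantic == 2]

# Instance segmentation: all points (trunk + canopy) belonging to tree #7.
tree_7_points = points[instance == 7]
print(canopy_points.shape, tree_7_points.shape)
```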


Solution in practice

Data, data, data

There are quite a few annotated 3D point cloud datasets available, but most have shortcomings that make them unsuitable for our project. Some do not contain information about trees, their resolution is too low, or the applied categories are simply not fine-grained enough. As an example, only 'high vegetation' and 'low vegetation' labels are used in https://www.semantic3d.net/.

For this reason, our very first step was to help our partners create a large and heterogeneous training dataset from their own measurements. As several sensors are used, the dataset contains 3D point clouds and the corresponding 2D images. Since the point clouds are unstructured (as opposed to 2D images, where each data point is linked to a grid), the resolution is high, the data are noisy and the objects are very complex, an ideal dataset would consist of a few thousand annotated point clouds ("3D volumes") and annotated (segmented) images. In reality, you cook with what you have: all the results presented below were obtained using only a few hundred volumes!


Model selection and training

At the time we started working on this project, there were only a few efficient models available that could infer 3D object morphology from unstructured 3D point clouds. We ruled out some because they required far too much data or seemed too slow to be of practical use. There was an important lesson here: while there are many more images available and the 2D segmentation algorithms are quite mature, the analysis of 3D models is actually faster because 3D data is sparser. So instead of following a complicated approach that directly integrates 2D and 3D data, we built and tuned separate models for 2D and 3D segmentation. The final solution consisted of two steps. First, we trained a model to locate the trees (the centers of the tree trunks on the ground). Second, we trained a combined model, based on two previously published solutions, to segment the individual trees as well as their parts.
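At a very high level, such a two-step inference pipeline can be wired together as in the hypothetical sketch below; locate_trees and segment_tree are placeholders standing in for the trained models, not our actual networks:

```python
# Hypothetical sketch of a two-step pipeline: detect trunk centres, then segment
# the neighbourhood of each detected tree. Function bodies are placeholders.
from typing import List, Tuple
import numpy as np

def locate_trees(cloud: np.ndarray) -> List[Tuple[float, float]]:
    """Step 1: predict (x, y) positions of tree trunk centres on the ground."""
    return [(0.0, 0.0)]  # placeholder for the detection network

def segment_tree(cloud: np.ndarray) -> np.ndarray:
    """Step 2: label each point as background, trunk/branches or canopy."""
    return np.zeros(len(cloud), dtype=int)  # placeholder for the segmentation network

def process_scan(cloud: np.ndarray, radius: float = 10.0) -> List[np.ndarray]:
    """Run detection, then segment a neighbourhood around each detected trunk."""
    labelled_trees = []
    for centre in locate_trees(cloud):
        near = np.linalg.norm(cloud[:, :2] - np.asarray(centre), axis=1) < radius
        labelled_trees.append(segment_tree(cloud[near]))
    return labelled_trees

print(process_scan(np.random.rand(1000, 3) * 50.0)[0].shape)
```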

 

An example of the output of our solution fine-tuned on tree data:

3D semantic segmentation. On the left, the original view as captured by a 2D image. On the right, the model separates the electric pylons from the canopy and large branches. The colors ("mask") come from the 3D segmented data, as the coordinates of each point can be mapped onto the 2D representation.


Evaluation
 

The performance of the trained models was measured as accuracy on a portion of the annotated dataset that was never used during training. This process imitates how the model would work on future data, making the evaluation more objective. The predicted labels were compared to the human annotations. The accuracy of the tree detection and of the segmentation of the tree parts was the following:

 

mIoU   Background   Canopy   Trunk-branches
2D        97%         90%         70%
3D        99%         96%         78%

 

The metric is the so-called mIoU (mean Intersection over Union): the overlap between the real and predicted areas/volumes divided by their union, averaged over the classes.
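For reference, per-class IoU and its mean can be computed as in the following minimal sketch, assuming flat arrays of class IDs (the toy labels below are purely illustrative):

```python
# Minimal sketch of per-class IoU and mIoU for semantic segmentation labels.
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(intersection / union)
    return float(np.mean(ious))

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # toy reference labels
y_pred = np.array([0, 0, 1, 2, 2, 2, 1, 1])  # toy predictions
print(mean_iou(y_true, y_pred, num_classes=3))
```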

The next image is a visualization of the reference (human annotation) and the computer annotation of the same data:

The trained models can segment the tree parts (A) and the individual trees (B) 

Turning the solution into production

In this cooperation our main contribution was creating the core of the ML solution. However, the work did not stop there: we advised our partner on the ideal software and hardware infrastructure and provided further support for integrating our solution into the existing environment of NParks.

The final solution was deployed in a private cloud. The preprocessing of the incoming sensory data, the model inference, and the logging and storage of the results take place in an efficient, distributed manner, where each step can be visualized and manually corrected if necessary. The models can be further improved by exploiting what we learn from the difficult cases where human correction is needed. For that, there is a dedicated loop-back system that collects the flagged cases and initiates a new learning process.
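Conceptually, the loop-back mechanism can be thought of as a simple collector that batches human-corrected cases and triggers retraining. The sketch below is purely illustrative (names, threshold and storage are assumptions, not the actual deployment):

```python
# Hypothetical sketch of a loop-back collector for human-corrected predictions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlaggedCase:
    scan_id: str
    model_labels: list
    corrected_labels: list

@dataclass
class LoopBackCollector:
    queue: List[FlaggedCase] = field(default_factory=list)
    retrain_threshold: int = 100  # assumed batch size before a new training round

    def flag(self, case: FlaggedCase) -> None:
        self.queue.append(case)
        if len(self.queue) >= self.retrain_threshold:
            self.start_retraining()

    def start_retraining(self) -> None:
        # Placeholder: export the corrected cases and kick off a training job.
        print(f"Retraining triggered with {len(self.queue)} corrected cases")
        self.queue.clear()
```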

 

The model works even for heavily cluttered tree groups, albeit with lower accuracy. Here it is also true that an ML solution is only as good as its training set; in this particular case, annotation proved very difficult even for the human annotators.


How can we help?

During the project we met quite a few challenges typical of other practical tasks: "translating" the task into the language of algorithms, analyzing and improving already existing solutions, creating proper data for training, etc. Hopefully this blog post has given a useful and entertaining peek into the world of applied AI in business and industry.

If you have a complex problem that you think could be solved by our AI stack, then let us have a talk!

The post AI to help green asset management: Monitoring urban trees appeared first on Polaris website.

]]>
SUCCESSFUL ENCLOSED STREAMING COLLABORATION https://polarisitgroup.com/2022/04/13/successful-enclosed-streaming-collaboration/ Wed, 13 Apr 2022 11:20:47 +0000 https://polarisitgroup.com/?p=5581 Another successful project thanks to Enclosed Streaming technology. Our agency partner, Budapest-based Universum 8 Zrt., concluded a months-long project on 20 March, which focused on a live streamed event broadcast from multiple locations. The Enclosed product family developed by ISRV Zrt. includes secure data transmission and high quality and availability streaming solutions. Using Enclosed Streaming’s […]

The post SUCCESSFUL ENCLOSED STREAMING COLLABORATION appeared first on Polaris website.

]]>
Another successful project thanks to Enclosed Streaming technology.

Our agency partner, Budapest-based Universum 8 Zrt., concluded a months-long project on 20 March, which focused on a live streamed event broadcast from multiple locations.

The Enclosed product family developed by ISRV Zrt. includes secure data transmission as well as high-quality, high-availability streaming solutions. Using Enclosed Streaming's solutions and services, the agency delivered an exceptionally successful project. The stream, broadcast from studios in Dubai and Budapest, was watched live by tens of thousands of viewers, and the entire show was viewed by nearly 1 million people. During the streaming, chess grandmaster Judit Polgár set a world record in simultaneous play at the Dubai World Expo, where the Hungarian pavilion of Expo 2020 Hungary provided the venue and the production of the entire show was centralised from the agency's Budapest studios.

We are proud to have contributed our expertise to such a prestigious project. Thanks to the streaming experience of our colleagues, we were able to set up a streaming system in Budapest that allowed the agency to broadcast the almost 9 hours of uninterrupted live show in high quality, continuously and safely. 

Thank you for your trust and congratulations on the success of the project.

The post SUCCESSFUL ENCLOSED STREAMING COLLABORATION appeared first on Polaris website.

]]>