Friday, 31 July 2020

Git Plugin Performance Improvement Phase-2 Progress

The second phase of the Git Plugin Performance Improvement project has made strong progress in implementing the performance insights derived from the phase-one JMH micro-benchmark experiments.

What we have learned so far in this project is that the performance of a git fetch is highly correlated with the size of the remote repository. To improve fetch performance in this plugin, our task was to measure the performance difference between the two git implementations available in the Git Plugin: command-line git and JGit.

Our major finding was that command-line git performs much better than JGit for large repositories (>100 MiB). Interestingly, JGit performs better than git when the repository is smaller than 100 MiB.

In this phase, we successfully encoded this knowledge from the benchmarks into a new feature called the GitToolChooser.

GitToolChooser

This class recommends a git implementation based on the size of a repository, which, per the performance benchmarks, strongly correlates with the performance of git fetch.

It uses two heuristics to estimate the size (a minimal sketch of the resulting decision logic follows the list):

  • Using the cached .git directory from multibranch projects to estimate the size of a repository

  • Providing an extension point which, once implemented, can use REST APIs exposed by git service providers like GitHub, GitLab, etc. to fetch the size of the remote repository.
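For illustration only, here is a minimal Python sketch of the decision logic described above, using the 100 MiB threshold from the benchmarks. The helper names are hypothetical; the real implementation is the Java code in PR-931.

import os

# Threshold derived from the phase-one JMH benchmarks: JGit tends to win below
# roughly 100 MiB, while command-line git wins above it
SIZE_THRESHOLD_MIB = 100

def estimate_size_from_cached_git_dir(git_dir):
    """Heuristic 1: sum the on-disk size of a cached .git directory, in MiB."""
    total_bytes = 0
    for root, _dirs, files in os.walk(git_dir):
        for name in files:
            total_bytes += os.path.getsize(os.path.join(root, name))
    return total_bytes / (1024 * 1024)

def recommend_git_tool(size_mib):
    """Recommend an implementation based on the estimated repository size."""
    return "git" if size_mib > SIZE_THRESHOLD_MIB else "jgit"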

Will it optimize your Jenkins instance? It can, provided one of the following holds:

  • if you have a multibranch project in your Jenkins instance, the plugin can use its cached repository to recommend the optimal git implementation

  • if you have a branch source plugin installed in the Jenkins instance, that plugin will recommend a git implementation using the REST APIs provided by GitHub or GitLab respectively.

The architecture and code for this class are available in PR-931.

Note: This functionality is an upcoming feature in the subsequent Git Plugin release.

JMH benchmarks in multiple environments

The benchmarks had been executed frequently on Linux and macOS machines, but we needed to check whether the results would hold across more platforms, to ensure that the solution (GitToolChooser) is generally platform-agnostic.

To test this hypothesis, we performed an experiment:

Running the git fetch operation on a 400 MiB repository on:

  • Windows

  • FreeBSD 12

  • ppc64le

  • s390x

The result of running this experiment is given below:

Performance on multiple platforms

Observations:

  • ppc64le and s390x are able to run the operation in almost half the time it takes for the Windows or FreeBSD 12 machine. This behavior may be attributed to the increased computational power of those machines.

  • The performance difference between git and JGit remains consistent across all platforms, which is a positive sign for the GitToolChooser: its recommendation would be consistent across multiple devices and operating systems.

Release Plan 🚀

JENKINS-49757 - Avoid double fetch from Git checkout step: this issue was fixed in phase one and avoids the second fetch in redundant cases. It will be shipped with benchmarks showing the change in performance due to the removal of the second fetch.

GitToolChooser

  • PR-931: this pull request is under review and will be shipped in one of the subsequent Git Plugin releases.

Current Challenges with GitToolChooser

  • Implement the extension point to support the GitHub Branch Source Plugin, GitLab Branch Source Plugin, and Gitea Plugin.

  • The current version of JGit doesn’t support LFS checkout or sparse checkout, so we need to make sure that the recommendation doesn’t break existing use cases.

Future Work

In phase three, we wish to:

  • Release a new version of the Git and Git Client Plugin with the features developed during the project

  • Continue to explore more areas for performance improvement

  • Add a new git operation: git clone (Stretch Goal)

Reaching Out

Feel free to reach out to us for any questions or feedback on the project’s Gitter Channel or the Jenkins Developer Mailing list.

Biocon Biologics valued at $3.5 billion after Tata Capital's $30 million financing

“Through investments in R&D and high-quality manufacturing infrastructure, we are confident of achieving our aspiration of serving 5 million patients through our biosimilars portfolio,” CEO Hamacher said.

AI Has Track Record in Fraud Prevention for Credit Card Issuers

By John P. Desmond, AI Trends Editor

The financial services industry has compiled a track record in the use of AI for fraud detection, with AI applications at Visa and Experian being two notable examples.

Visa reports saving an estimated $25 billion annually from its use of AI applications for fraud detection, according to Melissa McSherry, a senior VP and global head of data for Visa, as reported in VentureBeat. The path to AI that Visa chose may hold lessons for other companies thinking about how to launch their own automation projects.

“We have definitely taken a use case approach to AI,” McSherry stated. “We don’t deploy AI for the sake of AI. We deploy it because it’s the most effective way to solve a problem.”

Melissa McSherry, Senior VP and Global Head of Data, Visa

The Visa Advanced Authorization (VAA) platform scores every transaction that goes across the network, rating each one based on the likelihood it is fraudulent. This allows more transactions to be approved quickly. “With 3.5 billion cards and 210 billion transactions a year, it is really worth it to everyone to make those cards work better and for more transactions to go through,” McSherry stated.

First deployed in 1993, the VAA has since evolved to use recurrent neural networks along with gradient boosted trees. Having the defined use case of fraud detection has helped Visa focus on how AI and machine learning can improve its services.
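Purely as an illustration of scoring transactions with gradient boosted trees (synthetic data and features, not Visa's actual system), such a scoring step might be sketched as:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic transaction features (e.g. amount, merchant category, time of day,
# distance from previous transaction); y marks known fraudulent transactions
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 4))
y = (rng.random(10000) < 0.01).astype(int)  # roughly 1% fraud rate

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Each transaction gets a fraud likelihood; downstream rules decide whether to
# approve, decline, or step up authentication
fraud_scores = model.predict_proba(X_test)[:, 1]
print(fraud_scores[:5])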

“It helps that we started with the first use case a long time ago,” McSherry said. “There’s no substitute for experience, and I think we have a fair amount of experience at this point on how to build and deploy these models. And so the first lesson is just at a certain point, you have to pick a use case and you just have to start.”

Visa has seen a 20-30% improvement in model performance when advanced AI techniques are applied versus more traditional machine learning techniques such as gradient boosted trees, she noted. In some cases, the improvement has been more than 100%, which speeds the development of new products and services. “We are able to put better products in front of consumers faster,” she stated.

Steve Platt of Experian, the global information and credit services provider, also has experience with AI and fraud detection across more than one generation of systems. Now the head of Global Software at Experian, Platt had his first exposure to AI and fraud detection in January 2001, when he joined the Hecht-Nielsen Neurocomputer Corp. (HNC) in San Diego to help commercialize software that founder Robert Hecht-Nielsen had originated.

Steve Platt, Group President, Global Business Information Services, Experian

Hecht-Nielsen was a neuroscientist and entrepreneur who had been teaching at the University of California, according to an account in Forbes. He had worked with a small group of academics and researchers on neural networks, a blend of statistics and AI. They developed a fraud detection product called Falcon and had acquired some customers, who liked the product and were looking for improvements – better predictions and more value.

As manager of the product, Platt emphasized building fraud detection into the approval process for credit card transactions. The more advanced card issuers could then deliver an authorization in real time; this was before cloud computing.

Platt also concentrated on getting a high volume of good-quality, well-structured data from credit card transactions to help the Falcon machine learning application learn. He also stayed close to early adopter customers, to understand their problems, integrate with their transaction environments and analyze their data. Leading issuers including MBNA, Banc One and First Data were approached to partner on a design solution that would work for them.

HNC was sold in 2002 to Fair Isaac Corp., now called FICO, a data analytics company focused on credit scoring services. Now called the FICO Falcon Platform, the product is still in extensive use. Platt worked for the acquiring company for several years, then founded a fraud prevention company called BasePoint Analytics, and then moved to Experian. He’s been there 10 years and  is now the company’s Group President, Global Business Information Services.

The lessons of HNC have served him well. Experian has a number of AI/machine learning-based products on the market, including its core credit score offering, fraud prevention and collections. “We’re in the business of data and data-driven insights,” he stated.

Experian has developed a way to blend software development practices of yesteryear and today’s AI software development. An internal DataLabs organization pursues projects with business units it deems innovative, exploring new data sources, new algorithms and new use cases.  The lab sets up a common method for AI-based product development that employs agile methods and rapid development. The developers work closely with selected customers to build a proof of concept or prototype. They iterate that into a product, then help the customer put it into operation in one or more regions.

The structured methodology enables Experian to monitor where it is in the process and quickly adjust if the business case is changing. One product developed in this framework is Experian Boost, which allows consumers to “boost” their credit scores by providing mobile phone and utilities payments data not captured in the traditional credit scoring process. Though still undergoing testing, it was brought to the market in nine months.

Bigger Banks Using Fraud Detection; Kount Attracts Investment

Financial institutions with over $100 billion in assets are the most likely to have adopted AI and of those, 73% are currently using AI for payment fraud detection, according to a recent survey, AI Innovation Playbook, published by PYMNTS and reported in Forbes. The study was based on interviews with 200 financial executives from commercial banks, community banks and credit unions across the US.

Fraud detection has proved an attractive target market for startups, such as Kount, which launched in Boise, Idaho in 2007. Today the company holds 29 patents and has been funded with an $80 million investment from CVC Capital Partners in 2016.

Kount’s Identity Trust Global Network delivers real-time fraud prevention and account protection, and enables personalized customer experiences, for more than 6,500 leading brands and payment providers.

The closing of many businesses during the coronavirus lockdown has led to soaring e-commerce volume, which has presented opportunities for fraud detection players, Kount founder and CEO Brad Wiskirchen recently wrote in PYMNTS.

Brad Wiskirchen, Founder and CEO, Kount

“As the pandemic accelerated in March and April, we saw digital transaction volumes skyrocket for many industries, like vitamins, wellness, electronics, pet supplies and others,” he stated. “In April, purchase volumes for crafts and online wine were up more than 600% from the average February week. Handling that volume while maintaining exceptional customer experiences and preventing fraud requires an adaptable solution that can make accurate decisions in real time.”

The increase in e-commerce means more businesses are offering new digital experiences, including memberships, accounts and loyalty points. “Each represents a unique area of the customer experience that should be protected,” states Wiskirchen.

Read the source articles in VentureBeat, Forbes and  PYMNTS.

Researcher Developing Smart Chips to Address Battery Safety

By AI Trends Staff

Enhancing battery safety is the number one issue for the lithium-ion batteries that power products ranging from consumer e-bikes and power tools to self-driving cars and submarines, Dr. Rachid Yazami told an audience at the virtual International Battery Seminar from Cambridge EnerTech this week.

Dr. Yazami, founder of Singapore startup KVI, which is developing smart chips to enhance battery performance and safety, is known for his critical role in the development of lithium-ion batteries. In a talk on whether AI can help address battery issues, he outlined the challenges in a battery market that is projected to reach $35 billion in value by 2025. “The market is growing very fast,” he said.

Dr. Rachid Yazami, founder of KVI and contributor to development of the lithium-ion battery

Short circuits in lithium-ion batteries are caused when a thin slip of polypropylene that keeps electrodes from touching is breached, so that the electrodes come in contact and generate heat and possibly fire.

“We don’t want to see vehicles and mobile phones catching fire,” Dr. Yazami said. “These things we have to address very seriously. A lot of progress has been made, especially in increasing the quality of the batteries.”

Additional challenges include: reducing the charging time for an electric vehicle, currently 1.5 to eight hours depending on the battery, to less than one hour or even 30 minutes; increasing the driving range from the current 250 to 500 km (150 to 300 miles) to 900 km (560 miles); and extending the battery service life, currently about two years, to 10 years.

In its work to address these challenges, KVI has developed two kinds of chips, one that combines material science with AI to “manage the battery performance in smarter ways,” and the other that helps to manage a new protocol for fast charging. Eventually the chips may be combined, “to get all the advantages.”  Dr. Yazami will be looking for manufacturers to embed the chips in new batteries once they are available,  which he estimated would be in 12 to 18 months.  “We already have working prototypes for ultrafast battery chargers,” he said in response to a question from AI Trends.

The fast-charging chip can measure data on entropy, a property of a thermodynamic system, to assess the state of the battery and of its safety. “We can adapt the charging protocol according to the state of charge of the battery,” Dr. Yazami said. “We use the AI to optimize the data processing and the protocols we use to charge the battery,” allowing for optimal charging, not more than the battery can take. The system continues to learn as the battery is used and ages.

One of the biggest causes of fires in lithium-ion batteries is internal short circuits, “which happen when the separator breaks or there is a hot spot and it melts,” Dr. Yazami said. “That can trigger events that end in a thermal runaway, sometimes fire and explosions.” Detecting this thermal runaway at an early stage is very difficult.

“We have developed some technology to detect it at a very early stage,” he said. He has found a linear relationship between battery entropy, charted on the Y axis, and battery enthalpy, a thermodynamic quantity equal to the total heat content of the system, on the X axis. “We have found very specific cell voltages where we can trigger an alarm.” This allows the researchers to adjust the charging voltages based on the state of health of the battery, using AI to help. “We are developing the software to follow the battery as it is aging.”
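As a hedged sketch only (synthetic numbers, not KVI's actual model), the kind of linear-relationship check described above might look like this in Python:

import numpy as np

# Synthetic measurements: enthalpy (X axis) and entropy (Y axis) for a healthy cell
enthalpy = np.linspace(10.0, 50.0, 20)
entropy = 0.8 * enthalpy + 2.0 + np.random.default_rng(1).normal(0, 0.1, 20)

# Fit the linear relationship; the coefficients drift as the battery ages,
# which is what the AI model is said to track over time
slope, intercept = np.polyfit(enthalpy, entropy, 1)

def entropy_residual(h, s):
    """Deviation of a new (enthalpy, entropy) reading from the healthy baseline."""
    return s - (slope * h + intercept)

# Trigger an alarm when a reading drifts too far from the fitted line
ALARM_THRESHOLD = 0.5  # assumed value; set from known-failure data in practice
new_h, new_s = 30.0, 27.1
if abs(entropy_residual(new_h, new_s)) > ALARM_THRESHOLD:
    print("Battery safety alarm: entropy/enthalpy relationship has shifted")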

The team has also developed a non-linear fast charging solution, in which current and voltage is not constant, but adapts to the battery.

“We have developed solutions based on AI and thermodynamics data to monitor the state of battery safety,” Dr. Yazami said.

Asked by another researcher how he measures entropy and enthalpy for state of charge (SoC) detection, Dr. Yazami said the linear relationship depends on the chemistry and state of health of the battery. ”The linear coefficients evolve as the battery ages. The AI enables prediction of the coefficients to determine the state of health of the battery,” he said.

Learn more about Dr. Rachid Yazami.

AI Can Help Protect Smart Home IoT Devices from Hackers

By AI Trends Staff

Researchers continue to find security flaws in smart home hub IoT devices, a symptom of an immature security infrastructure. One researcher suggests AI can help address the vulnerabilities.

A cybersecurity team from ESET, an internet security company based in Slovakia, found bugs in three different hubs dangerous enough to trigger remote code execution, data leaks and Man-in-the-Middle attacks, according to a recent account from ZDNet.  The hubs were: the Fibaro Home Center Lite, eQ-3’s Homematic Central Control Unit (CCU2) and ElkoEP’s eLAN-RF-003.

The issues were reported to the vendors, and ESET did some follow up evaluation later. “Some of the issues appear to have been left unresolved, at least on older generations of devices,” ESET stated in its report. “Even if newer, more secure generations are available, though, the older ones are still in operation […] With little incentive for users of older-but-functional devices to upgrade them, they [users] need to be cautious, as they could still be exposed.”

The smart home hub vulnerabilities exist in devices that are poised for dramatic growth. According to IDC, the global market for smart home devices was expected to grow 26.9% in 2019, amounting to 832.7 million shipments. Growth of 17% is expected through 2023.

Revenue generated by the smart home market was estimated to be some $74 billion in 2019, with the US leading at $25 billion in sales projected for 2020, according to Statista. US penetration of smart homes is projected to grow from 18.5% in 2020 to 52% by 2023, the researchers estimate.

This growth means homes will be equipped with enough connected devices to rival the number of connections in a mid-sized company. Updates, passwords and settings will have to be managed, without the support of an IT security team or enterprise-level security tools, suggested a recent account from the IoT Security Foundation.

“This is where artificial intelligence and machine learning can come to the rescue,” the report states. “As is the case in many industries and niches, machine learning is complementing human effort and making up for the lack of human resources.”

AI Looks for Patterns in Device Communication to Flag Unusual Activity

AI is especially adept at finding and establishing patterns when it is fed huge amounts of data, which IoT devices supply in abundance.

For example, a network traffic monitoring system can be applied to interactions between devices to find attacks that might have slipped past the outer perimeter. While the IoT is heavy with machine-to-machine (M2M) traffic, each device's functionality and interactions are limited, so devices engaging in abnormal exchanges can be singled out. They may be compromised.
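As a rough illustration of that idea (not any vendor's product), an unsupervised detector can be trained on per-device traffic features and asked to flag deviations; the feature set and library choice below are assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features collected by a home network monitor:
# [packets per hour, distinct peers contacted, bytes uploaded per hour]
normal_traffic = np.array([
    [120, 2, 5000],
    [150, 3, 7500],
    [110, 2, 4800],
    [140, 2, 6900],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_traffic)

# A smart bulb suddenly contacting dozens of peers and uploading large volumes
# of data is exactly the kind of abnormal exchange worth flagging
suspicious = np.array([[2000, 45, 900000]])
print(detector.predict(suspicious))  # -1 marks the sample as anomalous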

The common denominator of AI-based endpoint solutions that can outsmart malware is that they are very lightweight and use pattern-based approaches to deal with threats.

Researchers with consultancy Black Marble last year reported finding three vulnerabilities in two smart hubs made by Zipato of Zagreb, Croatia, according to an account in Bank Info Security. The researchers tried to see if they could unlock a door remotely without prior access, and take data off a single controller that could be leveraged to open other doors. They also searched for vulnerabilities that might allow for unlocking a door on the same network as the controller.

The researchers accomplished two of the three tasks and reported the third was very likely possible given enough time. The researchers reported the results to the vendor, which was reported to have addressed the software issues in a timely manner.

Zipato says it has 112,000 devices in 20,000 households across 89 countries; the company is not sure how many serve as smart home hubs.

Hackers Look for Data of Value Wherever They Can Find It

Hackers target smart home hubs for the potential to retrieve passwords and other data they can use for further exploits.

“All of these smart devices are really networked computers in addition to what they traditionally are: refrigerators, light bulbs, televisions, cat litter boxes, dog feeders, cameras, garage door openers, door locks,” stated Professor Ralph Russo, Director of Information Technology Programs at Tulane University, in an account from Safety. ”Many of these continually collect data from embedded sensors. Malicious actors could gain access to your home network through your device if they can exploit an IoT device vulnerability.”

Professor Ralph Russo, Director of Information Technology Programs, Tulane University

Devices seen most at risk are outdoor devices with embedded computers that support little or no security protocols, such as garage door openers, wireless doorbells and smart sprinklers. Next most vulnerable are devices inside the home that can be controlled through an app from a smartphone or PC, such as smart bulbs and switches, security monitors, smart door locks and smart thermostats.

“These devices rely on weak security tokens and may be hacked due to weaknesses in the communication protocols used, configuration settings or vulnerable entry-points left open by the vendor for maintenance,” stated Dr. Zahid Anwar, an associate professor at Fontbonne University, St Louis.

Dr. Zahid Anwar, Associate Professor, Fontbonne University, St Louis

Some hackers intend to take over smart home devices as part of a plan to build a network of bots, which could be used for example to orchestrate a large-scale capture of personal data, suggested Maciej Markiewicz, Security CoP Coordinator & Sr. Android Developer at Netguru, a digital consultancy and software company. “These devices can be easily used to conduct more complex attacks. Very often, the management of such botnet networks consisting of smart home devices are sold on the darknet to be used in crimes,” he stated.

Suggestions for protecting smart home hubs include: assessing whether the risk of connection is worth the benefit, securing the Wi-Fi router, using a password manager, registering devices with the manufacturers in order to get software updates, considering a professional installation, and unplugging devices not in use.

Read the source articles in  ZDNet, at the IoT Security Foundation, in Bank Info Security and in Safety.

Coping With A Potential Mobility Frenzy Due To AI Autonomous Cars

By Lance Eliot, the AI Trends Insider

Walk or drive?

That’s sometimes a daily decision that we all need to make.

A colleague the other day drove about a half block down the street from his office, just to get a coffee from his favorite coffee shop.

You might assume that foul weather prompted him to use his car for the half-block coffee quest rather than hoofing the distance on foot.

Nope, there wasn’t any rain, no snow, no inclement weather of any kind.

Maybe he had a bad leg or other ailments?

No, he’s in perfectly good health and was readily capable of strutting the half-block distance.

Here in California, we are known for our car culture and devotion to using our automobiles for the smallest of distances. Our motto seems to be that you’d be foolhardy to walk when you have a car that can get you to your desired destination, regardless of the distance involved.

Numerous publicly stated concerns have been raised about this kind of mindset.

Driving a car when you could have walked is tantamount to producing excess pollution that could have been otherwise avoided. The driving act also causes the consumption of fuel, along with added wear-and-tear on the car and the roadway infrastructure, all of which seem unnecessary for short walkable trips.

And don’t bring up the obesity topic and how valuable walking can be to your welfare, it’s a point that might bring forth fisticuffs from some drivers that believe fervently in using their car to drive anyplace and all places, whenever they wish.

One aspect that likely factored into his decision was whether there was a place to park his car, since the coffee shop was not a drive thru.

We all know how downright exasperating it can be to find a parking spot.

Suppose that parking never became a problem again.

Suppose that using a car to go a half-block distance was always readily feasible.

Suppose that you could use a car for any driving distance and could potentially even use a car to get from your house to a neighbor’s home just down the street from you.

Some of us, maybe a lot of us, might become tempted to use cars a lot more than we do now.

In the United States, we drive about 3.22 trillion miles per year. That figure, though, reflects the various barriers or hurdles involved in opting to make use of a car.

Here’s an intriguing question: If we had true self-driving cars available, ready 24×7 to give you a lift, would we become more enamored of using cars and taking many more short trips?

Think of the zillions of daily short trips that might be done via car use.

Add to that the ease of making longer trips that today you might not undertake, perhaps driving to see your grandma when you normally wouldn’t feel up to the driving task.

The 3.22 trillion miles of car usage could jump dramatically.

It could rise by say 10% or 20%, or maybe double or triple in size.

It could generate an outsized mobility frenzy.

Let’s unpack the matter and explore the implications of this seemingly uncapped explosion of car travel.

For the grand convergence leading to the advent of self-driving cars, see my discussion here: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

The emergence of self-driving cars is like trying to achieve a moonshot, here’s my explanation: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

There are ways for a self-driving car to look conspicuous, I’ve described them here: https://aitrends.com/ai-insider/conspicuity-self-driving-cars-overlooked-crucial-capability/

To learn about how self-driving cars will be operated non-stop, see my indication here: https://aitrends.com/ai-insider/non-stop-ai-self-driving-cars-truths-and-consequences/

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless cars are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional cars, so it’s unlikely to have much of an impact on how many miles we opt to travel.

For semi-autonomous cars, it is equally important to mention a disturbing aspect that has been arising, namely that despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the car, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Distances Traveled

For Level 4 and Level 5 true self-driving cars, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

For those of you that use ridesharing today, you’ll be joined by millions upon millions of other Americans that will be doing the same, except there won’t be a human driver behind the wheel anymore.

Similar to requesting a ridesharing trip of today, we will all merely consult our smartphone and request a lift. The nearest self-driving car will respond to your request and arrive to pick you up.

Some believe that we’ll have so many self-driving cars on our roads that they’ll be quick to reach you.

Furthermore, these driverless cars will be roaming and meandering constantly, awaiting the next request for a pick-up, and thus will be statistically close to you whenever you request a ride.

Nobody is sure what the cost to use self-driving cars will be, but let’s assume for the moment that the cost is less than today’s human-driven ridesharing services. Indeed, assume that the cost is a lot lower, perhaps several times less than a human-driven alternative.

Let’s put two and two together.

Ubiquitous driverless cars, ready to give you a lift, doing so at a minimal cost, and can whisk you to whatever destination you specify.

The AI that’s driving the car won’t berate you for going a half-block.

No need to carry on idle chit chat with the AI.

It’s like going for a ride in a chauffeur-driven car, and you are in full command of saying where you want to go, without any backlash from the driver (the AI isn’t going to whine or complain, though perhaps there will be a mode that you can activate if that’s the kind of driving journey you relish).

This is going to spark induced demand on steroids.

Induced demand refers to suppressed demand for a product or service that can spring forth once that product or service becomes more readily available.

The classic example involves adding a new lane to an existing highway or freeway. We’ve all experienced the circumstance whereby the new lane doesn’t end-up alleviating traffic.

Why not?

Because there is usually suppressed demand that comes out of the woodwork to fill-up the added capacity. People that before were unwilling to get onto the roadway due to the traffic congestion are bound to think that the added lane makes it viable to now do so, yet once they start to use the highway it ends-up with so much traffic that once again the lanes get jammed.

With the advent of driverless cars, and once the availability of using car travel enters a nearly friction-free mode, the logical next step is that people will use car travel abundantly.

All those short trips that might have been costly to take or might have required a lot of waiting time, you’ll now be able to undertake those with ease.

In fact, some believe that self-driving cars could undermine micro-mobility too.

Micro-mobility is the use of electric scooters, shared bikes, and electric skateboards, which today are gradually growing in popularity to go the “last mile” to your destination.

If a driverless car can take you directly to your final destination, no need to bother with some other travel option such as micro-mobility.

How far down this self-driving car rabbit hole might we go?

There could be the emergence of a new cultural norm that you always are expected to use a driverless car, and anyone dumb enough or stubborn enough to walk or ride a bike is considered an oddball or outcast.

Is this what we want?

Could it cause some adverse consequences and spiral out-of-control?

For info about self-driving cars as a form of Personal Rapid Transit (PRT), see my explanation here: https://aitrends.com/ai-insider/personal-rapid-transit-prt-and-ai-self-driving-cars/

On the use of self-driving cars for family vacations, see my indication: https://aitrends.com/ai-insider/family-road-trip-and-ai-self-driving-cars/

In terms of idealism about self-driving cars, here’s my analysis: https://aitrends.com/ai-insider/idealism-and-ai-self-driving-cars/

A significant aspect will be induced demand for AI autonomous cars, which I explain here: https://aitrends.com/ai-insider/induced-demand-driven-by-ai-self-driving-cars/

Mobility Frenzy Gets A Backlash

Well, it could be that we are sensible enough that we realize there isn’t a need to always use a driverless car when some alternative option exists.

Even if driverless cars are an easy choice, our society might assert that we should still walk and ride our bikes and scooters.

Since driverless cars are predicted to reduce the number of annual deaths and injuries due to car accidents, people might be more open to riding bikes and scooters, plus pedestrians might be less worried about getting run over by a car.

Futuristic cities and downtown areas might ban any car traffic in their inner core area. Self-driving cars will get you to the outer ring of the inner core, and from that point, you’ll need to walk or use a micro-mobility selection.

From a pollution perspective, using today’s combustion engine cars is replete with lots of tailpipe emissions. The odds are that self-driving cars will be EV’s (Electrical Vehicles), partially due to the need to have such vast amounts of electrical power for the AI and on-board computer processors. As such, the increased use of driverless cars won’t boost pollution on par with gasoline-powered cars.

Nonetheless, there is a carbon footprint associated with the electrical charging of EV’s. We might become sensitive to how much electricity we are consuming by taking so many driverless car trips. This could cause people to think twice before using a self-driving car.

Conclusion

Keep in mind that we are assuming that self-driving cars will be priced so low on a ridesharing basis that everyone will readily be able to afford to use driverless cars.

It could be that the cost is not quite as low as assumed, in which case the cost becomes a mitigating factor to dampen the mobility frenzy.

Another key assumption is that driverless cars will be plentiful and roaming so that they are within a short distance of anyone requesting a ride.

My colleague would have likely walked to the coffee shop in a world of self-driving cars if the driverless car was going to take longer to reach him than the time it would take to just meander over on his own.

And, this future era of mobility-for-all is going to occur many decades from now, since we today have 250 million conventional cars and it will take many years to gradually mothball them and have a new stock of self-driving cars gradually become prevalent.

Are self-driving cars going to be our Utopia, or might it be a Dystopia in which people no longer walk or ride bikes and instead get into their mobility bubbles and hide from their fellow humans while making the shortest of trips?

The frenzy would be of our own making, and hopefully, we could also deal with shaping it to ensure that we are still a society of people and walking, though I’m sure that some will still claim that walking is overrated.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Director of National Intelligence Releases Six Principles for Ethical AI

By AI Trends Staff

This week, the Office of the Director of National Intelligence (ODNI) released the first of an evolving set of principles for the ethical use of AI. The six principles, ranging from privacy to transparency to cybersecurity, were described as Version 1.0 and were approved by John Ratcliffe, Director of National Intelligence.

The six principles are positioned as a guide for the nation’s 17 intelligence agencies, especially to help them work with private companies contracted to help the government build systems incorporating AI, according to an account in Breaking Defense. The intelligence agency principles complement the AI principles adopted by the Pentagon earlier this year.

“These AI ethics principles don’t diminish our ability to achieve our national security mission,” stated Ben Huebner, head of the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI, or use of AI, provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

Ben Huebner, head of the Office of Civil Liberties, Privacy, and Transparency, ODNI

Feedback on the intelligence community’s AI principles is welcome, its managers say. “We are absolutely welcoming public comment and feedback on this,” stated Huebner, noting that there will be a way to give public feedback at Intel.gov. “No question at all that there are aspects of what we do that are and remain classified. I think though, what we can do is talk in general terms about some things that we are doing.”

Dean Souleles, Chief Technology Advisor for ODNI, said the “science of AI is not 100% complete yet,” but the ethics documents give intelligence officials a current roadmap of how best to use this emerging technology.

“It is too early to define a long list of do’s and don’ts. We need to understand how this technology works, we need to spend our time under the framework and guidelines that we’re putting out to make sure that we’re staying within the guidelines. But this is a very, very fast-moving train with this technology,” stated Souleles, in an account in Federal News Network.

Feedback is Welcome

The intelligence community expects to release updates to its AI documents as the technology evolves and it responds to questions. One of the issues being considered by the intelligence agencies is the role of the “human-in-the-loop,” in the parlance of the DoD. For example, if a voice-to-text application has been trained on a dialect from one region of the world, how accurate is it for other dialects of the same language? “That’s something I need to know about,” stated Huebner.

Feedback to the intelligence agencies on their AI principles is likely to derive from examples in the private sector. “We think there’s a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” stated Souleles.

Dean Souleles, Chief Technology Advisor, ODNI

For example, in trying to identify the source of a threat, examples from business could be helpful to the intelligence community, which is both reassuring and ominous.

“There are many areas we’re going to be able to talk about going forward, where there’s overlap that does not expose our classified sources and methods,” stated Souleles, “because many, many, many of these things are really common problems.”

A major concern with AI, no matter who is developing it, is bias in algorithms, according to an account in C4Isrnet. The framework suggests steps for practitioners to take to discover undesired biases that may enter algorithms throughout the life cycle of an AI program.

“The important thing for intelligence analysts is to understand the sources of the data that we have, the inherent biases in those data, and then to be able to make their conclusions based on the understanding of that,” Souleles stated. “And that is not substantially different from the core mission of intelligence. We always deal with uncertainty.”

Here are the six principles, in the document’s own words:

Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

Read the source articles in Breaking Defense, Federal News Network and C4Isrnet. Read the intelligence community’s AI principles at Intel.gov.

Thursday, 30 July 2020

The Role of AI in the Growing Smart Homes Trend

Smart technology is a growing trend. These devices help you navigate your everyday life, whether you're getting directions or locking your doors from your phone while on the go. When you put these innovations together, you get a smart home. In these homes, artificial intelligence (AI) plays a significant role in bringing "smartness" to the next level.

The Rise of Smart Homes

Technology has been growing on somewhat of an exponential curve in the last decade. With the rise of smartphones, tech has taken the world to new and groundbreaking places. Lately, there has been an increasing trend of residential households becoming "smart," which experts predict will surpass 300 million homes in 2023. Technology that once did only one or two jobs is now going above and beyond, catering to personal needs and preferences to create more energy-efficient homes with better security and health features, like monitoring filtered water and air quality. Two factors contribute to this rise of smart tech: AI and the Internet of Things (IoT). AI puts automation on a learning curve where devices adapt to your needs and constantly evolve. IoT, on the other hand, connects everything to the internet. It provides a network of ...


Read More on Datafloq

Is Ethical Use of AI the Only Way Forward?

Artificial Intelligence, or AI, is the technology used to make intelligent machines. AI is used specifically for intelligent computing solutions, and AI-based tools and programs are built to mimic human intelligence, letting users perform activities similar to those of humans. Artificial intelligence is not limited to mimicking human intelligence, though, nor is it restricted to biologically observable techniques; experts keep evolving AI to make newer and more useful AI-based tools and programs. Although the use of AI is increasing day by day, only the ethical use of AI will turn out to be beneficial for the world.

Superb Benefits of Artificial Intelligence

Artificial Intelligence empowers computers to mimic human intelligence. AI-based tools are made using machine learning, deep learning, logic, decision trees, etc., and are used across the globe. AI is not only used by corporates; it is also used in various industries and domains, like medicine, astronomy, etc. The technology has the potential to transform the way the world works, it will turn out to be ...


Read More on Datafloq

Wednesday, 29 July 2020

6 Skills to Become a Master Data Scientist

Data science has grown by leaps and bounds in the past few years. According to Research & Markets, nearly 90% of business professionals say data and analytics will be a key part of their digital transformation, so it’s not surprising that there is a constant demand for data science professionals across the industry. Not to mention, the good paycheck, promising growth, and consistent challenges at work make a data science career extremely fulfilling. What’s challenging, however, is becoming a data scientist. A data scientist needs an extensive set of skills. Below are the skills you need to get started in data science:

  • Mathematics
  • Statistics
  • Programming
  • Machine Learning and Advanced Machine Learning (Deep Learning)
  • Data Visualization
  • Big Data
  • Data Ingestion
  • Data Munging
  • Tool Box
  • Data-Driven Problem Solving

Let’s delve into each skill one by one. 1. Mathematics: A common assumption among aspirants is that if you’re not good at maths, you will not be a good data scientist. Mathematics is a crucial part of data science, but it’s not the only skill to master. Strong mathematics skills build a strong foundation for data science. The major concepts you need to learn are: Matrices and Linear Algebra, Functions, Hash ...


Read More on Datafloq

What is the Future of Online Payments?

Electronic bill presentment and payment have made ecommerce faster, easier and more secure for both the seller and the buyer. The millennial population is looking for a redefined online payment solution. There are different companies offering electronic bill presentment and payment solutions, and every company wants to develop the payment solution of the future and be the first to enter the market with it. If you have been receiving online payments for the past couple of years, you already know that ecommerce does not stand still. You and other online sellers are always in search of a payment solution that delivers a better customer experience. Payment processing companies and electronic bill presentment and payment solution providers work relentlessly to provide solutions that benefit both the online seller and the buyer. They explore the scope of online payment to identify the trends that can change the future of electronic payments.

Mobile Wallets

The percentage of users using mobile wallets is expected to be 64% by the end of 2020. More than 23% of online buyers are willing to abandon mobile banking apps for mobile wallets. Wallet payments are expected to reach $503 billion by 2020. This highlights mobile ...


Read More on Datafloq

Industrial IoT Analytics with DeepIQ DataStudio and Databricks

A recent survey by Bain & Company shows that more than 30% of industrial companies are concerned about Information Technology and Operations Technology (IT&OT) integration. Another recent report by McKinsey & Company states that 70% of companies are still in “Pilot Purgatory” mode with industrial analytics projects, such as the use of the industrial internet of things (IIoT) and industrial IoT devices to reduce costs and/or improve operational efficiency. Clearly, the implementation of industrial analytics is not a trivial task – whether it’s IT&OT data integration or building machine learning (ML) models for predictive maintenance or asset optimization purposes.

DeepIQ DataStudio is a self-service analytics tool for industrial users and IoT applications that is powered by Databricks to make building analytic pipelines on IT&OT data simple. With DataStudio running natively on Databricks, you can:

  • Build and deploy sophisticated analytics pipelines in minutes with no programming needed
  • Use the cloud’s distributed compute capability for 50x performance gains using Databricks
  • Auto-scale storage and compute independently for industrial data volumes: from KB to PB
  • Leverage a rich library of native analytics for OT data to build accurate predictive models
  • Ingest, process and analyze any operational data source using built-in connectors such as:
    • Historians and SCADA systems (e.g. DeepIQ ISV partner OSIsoft PI and Asset Framework)
    • Relational data sources (e.g. SAP Plant Maintenance (PM) and SAP Manufacturing Intelligence and Integration (SAP MII))
    • Geo-spatial data

The user-friendly, drag-and-drop functionality, coupled with built-in sophisticated mathematical functions, empowers you to manage your data with ease, from data cleansing to merging multiple data streams to data processing to building supervised and unsupervised machine learning (ML) models.

The Manufacturing Industry Use Case: Improving the Life of Industrial Dryers

Industrial dryers are routinely used by many industries including chemical, food and beverage, paper and pulp, agriculture, and plastics. Like any other processing equipment, dryers need to be maintained to alleviate unexpected failures which could lead to significant losses. Predictive maintenance programs can help you reduce your OPEX by maintaining the equipment based on the actual conditions of various components of the dryer. The common failure components for the industrial dryer are trunnion wheels and shafts, drum tires, trunnion bearings and seals.


Figure 1: A Typical Industrial Dryer

In this article, we present the predictive maintenance of trunnion bearings using ML models on enriched sensor data.

  • We begin by ingesting historical time-series sensor readings from an OSIsoft PI System into a scalable, reliable Delta storage format.
  • We then enrich our machine-to-machine sensor readings with maintenance report data pulled from an SAP Plant Maintenance system using simple drag-and-drop pipelines running on Databricks.
  • Lastly, we analyze the data, identify anomalies and build a predictive maintenance ML model to detect failures before they occur.

Step 1: Data Consolidation

We start by connecting the Pi System’s Asset Framework (AF) server with DataStudio and ingest all the time-series tags for the dryer of interest into the Delta Lake. The drag-and-drop interface of DataStudio makes it easy to create powerful data ingestion and consolidation pipelines.

DataStudio workflow: SAP BAPI and OSIsoft PI to Delta Lake

Figure 3: DataStudio workflow: SAP BAPI to Delta Lake

Historic time-series data sets are massive and can be prohibitively expensive to store in traditional relational databases. Delta Lake is an open source storage format that resides on and augments the capabilities of cloud object storage (e.g. Azure Data Lake or Amazon S3) by providing a unified, ACID-compliant, extremely fast storage location for streaming and batch big data sources. It is the recommended storage format for ingesting, processing and training models against time-series data.

With DataStudio, querying OSIsoft PI AF is an easy task. Once the AF server details are configured, we just need to specify the root element of the asset, the tags and the time range of interest – DataStudio handles the remaining complex tasks. Let’s collect the data for tags that measure solid and liquid rates, ambient temperature and humidity, and the dryer rotation rate for the last 5 years for all bearings of a dryer into Delta Lake. Since the bearing vibrations are available at one hertz frequency, each bearing will have over 150 million values! Many of our customers ingest hundreds of thousands of tags generating PB of data. DataStudio achieves this scale by running natively on an auto-scaling Databricks cluster.
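Under the hood, this kind of ingestion amounts to landing the tag readings in Delta format. A minimal PySpark sketch of that step, using hypothetical paths and column names rather than DataStudio's internal code, and assuming a Databricks cluster where Delta Lake is available, might be:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pi-to-delta").getOrCreate()

# Hypothetical raw export of PI tag readings with columns: tag, timestamp, value
readings = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/dryer_tags/")
)

# Land the time series in Delta format, partitioned by tag for faster queries
(readings.write
    .format("delta")
    .mode("append")
    .partitionBy("tag")
    .save("/mnt/delta/dryer_timeseries"))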

Similarly, let’s extract the dryer failure dates from SAP PM. In DataStudio, we provide the parameters of the SAP BAPI we want to query, and the data is made available to us.

Step 2: Data Processing

Using DataStudio’s data visualization tool, we notice a few unexpectedly high and low values, and possibly missing readings that are auto-filled by the SCADA system. These outliers can be filtered out using a MAD (median absolute deviation) outlier algorithm in DataStudio.

Figure 4: Outlier removal time series (a) before and (b) after
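For reference, MAD-based filtering of the kind DataStudio applies can be approximated in a few lines of pandas; this is a generic sketch with made-up readings, not DeepIQ's implementation:

import pandas as pd

def remove_mad_outliers(series, threshold=3.5):
    """Drop points whose modified z-score (based on the median absolute
    deviation) exceeds the threshold."""
    median = series.median()
    mad = (series - median).abs().median()
    if mad == 0:
        return series
    modified_z = 0.6745 * (series - median) / mad
    return series[modified_z.abs() <= threshold]

# Example: bearing vibration readings with a couple of spurious spikes
vibration = pd.Series([0.21, 0.22, 0.20, 9.50, 0.23, 0.19, -7.00, 0.22])
print(remove_mad_outliers(vibration))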

We now notice extremely high-frequency noise that obscures some of the signal in the sensor readings. Let’s run an exponential smoothing algorithm to filter out that noise.

Figure 5: Smoothed data
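Exponential smoothing of this kind maps directly onto an exponentially weighted moving average; a generic sketch follows, with a synthetic signal and an assumed smoothing factor:

import numpy as np
import pandas as pd

# Synthetic noisy 1 Hz sensor signal
rng = np.random.default_rng(42)
signal = pd.Series(np.sin(np.linspace(0, 6, 600)) + rng.normal(0, 0.3, 600))

# An exponentially weighted moving average damps the high-frequency noise;
# alpha controls how aggressively the series is smoothed
smoothed = signal.ewm(alpha=0.1).mean()
print(smoothed.tail())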

We can now overlay the failure dates from SAP on our time-series data to see if any of the univariate signals have a direct signature of failure.

Step 3: Data Analytics

We use an auto-generative neural network to map the data into a low-dimensional space and look at the failure dates versus the time-series plots again. One of the hidden dimensions appears to carry a strong failure signal, showing a significant drop before failure.

Figure 6: Encoded feature plot from an auto-generative neural network model, overlaid with failure date
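The article does not specify the network used, but the dimensionality-reduction step can be sketched with a small autoencoder in Keras; the layer sizes and data below are assumptions:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical matrix of enriched sensor features (rows are timestamps)
X = np.random.default_rng(0).normal(size=(5000, 12)).astype("float32")

# The encoder compresses the 12 sensor features into 2 latent dimensions; the
# decoder reconstructs the input, so the latent space captures the dominant
# operating behaviour of the dryer
inputs = keras.Input(shape=(12,))
encoded = layers.Dense(2, activation="relu", name="latent")(inputs)
decoded = layers.Dense(12)(encoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

# The latent features are what gets plotted against the failure dates
encoder = keras.Model(inputs, encoded)
latent_features = encoder.predict(X)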

By verifying the presence of this trend before other failures, we understand the true negative rate. Many failures show this degradation at least 2 days prior to catastrophic failure. To improve our analysis, let’s develop a supervised machine learning model to predict the failure.

Step 4: Machine Learning

We notice that around two days before the failure, the residuals for the encoded liquid rate start rising significantly. To detect failures with lead time, we will train an ML model to predict the values of this tag, using the other tags pulled in as features, under normal operating conditions. When the predicted value of the tag two days into the future is outside normal operating conditions, we can fire off an alert for a predicted failure.

Figure 7: Anomaly score (residuals)
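A minimal sketch of that residual-based approach, with synthetic data and an assumed threshold rather than the actual dryer model:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Synthetic training data gathered under normal operating conditions:
# other_tags are the feature tags, liquid_rate is the tag we learn to predict
other_tags = rng.normal(size=(10000, 5))
liquid_rate = other_tags @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(0, 0.1, 10000)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(other_tags, liquid_rate)

def anomaly_score(features, observed_liquid_rate):
    """Residual between the observed tag value and the value the model predicts."""
    predicted = model.predict(features.reshape(1, -1))[0]
    return abs(observed_liquid_rate - predicted)

# Fire an alert when the residual leaves the normal operating band
RESIDUAL_THRESHOLD = 1.0  # assumed; tuned against historical failures in practice
if anomaly_score(rng.normal(size=5), 8.0) > RESIDUAL_THRESHOLD:
    print("Predicted bearing failure: schedule maintenance")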

Step 5: Model Deployment and Monitoring

We can now execute the ML model in a batch or streaming mode to generate intelligent alerts by simply adding it to our pipeline in DataStudio. The alerts are based on deviation from the expected value for the current operating conditions.
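Executed in streaming mode on Databricks, the scoring step would look roughly like the following Structured Streaming sketch; the paths, columns, threshold, and the placeholder scoring UDF are all assumptions, not DataStudio's generated pipeline:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("dryer-alerts").getOrCreate()

# In practice this UDF would wrap the trained model; here it is a placeholder
@F.udf(DoubleType())
def score_reading(value):
    return float(value)

alerts = (
    spark.readStream.format("delta").load("/mnt/delta/dryer_timeseries")
    .withColumn("anomaly_score", score_reading(F.col("value")))
    .filter(F.col("anomaly_score") > 1.0)  # assumed alert threshold
)

(alerts.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/delta/_checkpoints/dryer_alerts")
    .start("/mnt/delta/dryer_alerts"))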

As new data is generated, the analytic workflows will continue to monitor model performance against it using the metrics we define. The training workflows can be scheduled to retrain your model on a regular basis to ensure that your models stay up to date with the latest failure data.

Finally, any visualization software such as Spotfire, Tableau or PowerBI can be used to visualize actionable insights in near real time.

Next Steps: Get Started

We have shown how easy it is to build data ingestion, cleansing, processing and analytics pipelines using DataStudio on Databricks. With native integration with Delta Lake, DataStudio offers petabyte-scale data pipelines and machine learning. Look for an upcoming Databricks webinar where we show DataStudio in action!

If you would like additional information about this Blog post, or would like to start a pilot project, please visit https://deepiq.com or reach out to  info@deepiq.com.  You can follow us on LinkedIn at https://www.linkedin.com/company/deepiq16.


Tuesday, 28 July 2020

Machine Learning Plugin project - Coding Phase 2 blog post


Welcome back folks!

This blog post covers my second coding phase on the Jenkins Machine Learning Plugin for GSoC 2020. After successfully passing the phase 1 evaluation and demo, our team went on to face the challenges of phase 2.

Summary

This phase of coding was well spent on documentation and on fixing many bugs. As the main feature of connecting to an IPython kernel was completed in phase 1, we were able to focus on fixing minor/major bugs and documenting for users. As tracked in the JENKINS-62927 issue, a Docker agent was built so that users do not have to worry about the plugin's Python dependencies. With Python 2 deprecated, we ported our plugin to support Python 3. We have tested our plugin in Conda, venv, and Windows environments. The Machine Learning plugin has successfully passed end-to-end testing. The code editor feature needs further discussion/analysis; we have built a simple editor that may be useful in other ways in the future. PR#35

Main features of Machine Learning plugin

  • Run Jupyter notebook, (Zeppelin) JSON and Python files

  • Run Python code directly

  • Convert Jupyter Notebooks to Python and JSON

  • Configure IPython kernel properties

  • Support to execute Notebooks/Python on Agent

  • Support for Windows and Linux

Upcoming features

  • Extract graph/map/images from the code

  • Save artifacts according to the step name

  • Generate reports for corresponding build

Future improvements

  • Usage of JupyterRestClient

  • Support for multiple language kernels

    • Note: There is no commitment to future improvements during the GSoC period

Docker agent

The following Dockerfile can be used to build a Docker image that serves as an agent for the Machine Learning plugin. This Docker agent can be used to run notebooks or Python scripts.

Dockerfile
FROM jenkins/agent:latest

# MAINTAINER is deprecated; use a label instead
LABEL maintainer="Loghi <loghijiaha@gmail.com>"

USER root

# Install Python 3 and pip, then clean the apt cache to keep the image small
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        python3 \
        python3-pip \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /requirements.txt

# Install the plugin's Python dependencies and make python/pip point to Python 3
RUN pip3 install --upgrade pip setuptools \
    && pip3 install --no-cache-dir -r /requirements.txt \
    && ln -sf /usr/bin/python3 /usr/bin/python \
    && ln -sf /usr/bin/pip3 /usr/bin/pip

USER jenkins

Ported to Python 3

As discussed in the previous meeting, we concluded that the plugin should support Python 3, since Python 2.7 reached end of life at the beginning of 2020. The pull request for the Docker agent was also updated to use Python 3.

Jupyter REST Client API

The Jupyter Notebook server API seemed promising, since it can also be used to run notebooks and code. Three API implementations were merged into master. However, we had to focus on what was proposed in the design document and finish all must-have issues/work, so the Jupyter REST client was left for future implementation. It is also a good starting point for the community to contribute to the plugin.
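
For context, here is a small sketch of what talking to a Jupyter server's REST API from Python can look like; the server URL and token are placeholders, and this is not the plugin's JupyterRestClient code. Actually executing code also requires the kernel channels (WebSocket) API, which is omitted here.

Python sketch
# Jupyter server REST API sketch (placeholder URL and token; not the plugin's client).
import requests

BASE_URL = "http://localhost:8888"
HEADERS = {"Authorization": "token REPLACE_WITH_TOKEN"}

# List the kernels currently running on the server
kernels = requests.get(f"{BASE_URL}/api/kernels", headers=HEADERS).json()
print("Running kernels:", kernels)

# Start a new Python 3 kernel
kernel = requests.post(f"{BASE_URL}/api/kernels",
                       json={"name": "python3"},
                       headers=HEADERS).json()
print("Started kernel:", kernel["id"])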

Fixed bugs for running in agent

There were a few bugs related to the file path of notebooks while building a job. The major problem was caused by the Python dependencies needed to connect to an IPython kernel. All issues/bugs were fixed within the given timeline.

R support as a future improvement

This was an attempt to give a glimpse of how the plugin can be extended for multi-language support in the future. We concluded that the kernel should be selected dynamically based on the extension of the script file (like eval_model.rb or train_model.r), instead of scripting the same code for each kernel. A sketch of this idea follows.
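
To make the idea concrete, here is a hypothetical Python sketch of picking a kernel from the script's file extension; the extension-to-kernel map and the kernel names are assumptions for illustration, not the plugin's actual implementation.

Python sketch
# Hypothetical kernel selection by file extension (assumed kernel names).
from pathlib import Path

KERNEL_BY_EXTENSION = {
    ".py": "python3",    # IPython kernel
    ".r": "ir",          # IRkernel (assumed registered name)
    ".rb": "ruby",       # IRuby kernel (assumed registered name)
    ".ipynb": "python3",
}

def choose_kernel(script_path: str) -> str:
    # Map the file extension to a Jupyter kernel name, failing loudly if unknown
    ext = Path(script_path).suffix.lower()
    if ext not in KERNEL_BY_EXTENSION:
        raise ValueError(f"No kernel registered for '{ext}' files")
    return KERNEL_BY_EXTENSION[ext]

print(choose_kernel("train_model.r"))   # -> "ir"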

Documentation and End to End testing

Well-explained documentation was published in the repository. A guided tutorial on running a notebook checked out from a Git repository on an agent was included in the docs page. Mentors helped test the plugin on both Linux and Windows.

Code editor with rebuild feature

The code editor was classified as a nice-to-have feature in the design document. Borrowing the idea of the Jenkinsfile replay editor, I was able to do the same for the code. At the same time, when the source code comes from Git, editing it in place is not an elegant approach. After the discussion, we left the PR open, as it may have use cases in the future if needed.

Jenkins LTS update

The plugin has been updated to support Jenkins LTS 2.204.1, as 2.164.3 had some problems installing the Pipeline-related APIs/plugins.

Installation for experimental version

  1. Enable the experimental update center

  2. Search for Machine Learning Plugin and check the box next to it.

  3. Click on Install without restart

The plugin should now be installed on your system.

Should You Work as an Outsourced Employee?

Outsourcing is the shifting of non-core business functions from in-house to an outside third-party company or contractor. Given how widespread it’s becoming, you may find yourself wondering if a career as an outsourced employee could be right for you.

When it was initially introduced, the practice of outsourcing focused on lower-level functions like data entry and payroll. However, it has expanded to include higher-level functions like research and various IT services (see Bairesdev). This diversity of skills makes being an outsourced worker a realistic choice for a long-term career.

Job Stability

The stability of the outsourcing industry is demonstrated by its growth. The global outsourcing market grew from $45.6 billion in 2000 to $92.5 billion in 2019. This increase is largely driven by the cost savings, increased efficiencies, and access to expertise and advanced technology that companies achieve through the use of this staffing strategy.

Those benefits aren’t likely to change in the future, so working for an outsourcing company could become a long-term career with real opportunities for advancement and promotion as outsourcing continues to expand.

While many have the opportunity to choose whether to work as an outsourced employee from the outset, others may be pushed into the decision by their employer. If ...


Read More on Datafloq

The Top Metrics That Make a Difference for Software Development Teams

A software development project is sometimes a difficult thing to track in terms of meeting specific quantifiable goals. While business and project owners would undoubtedly say that they want to reach these goals, they fail to implement the necessary metrics into the software development process. Instead, the development process runs its course and the project may be completed but often at the cost of poor quality and many missed deadlines.

Why Use Software Development Metrics?

Software metrics are game-changers for a software development company (such as BairesDev) and clients alike. Metrics empower teams with the ability to make the most efficient choices based on objective data instead of their own personal feelings. Metrics are similar to measurements but are more complex and are themselves built from multiple measurements.

In the software development process itself, metrics represent a set of quantifiable parameters and measurements that help teams track a variety of aspects during development. Not only do these metrics track overall team performance, but they also help determine the delivery speed, assess the accuracy of the product when compared to the initial requirements and expectations, define the amount of time spent in each phase of the process, and help teams refine their processes by pointing ...


Read More on Datafloq

Big tech goes on shopping spree, brushing off antitrust scrutiny

Big Tech is buying even amid a world economy ravaged by the coronavirus pandemic.

Leveraging AI Development Trends for Business Transformation

Artificial Intelligence has enabled organizations to achieve some massive breakthroughs in the past decade. From its use in the supply chain and e-commerce to research and development, artificial intelligence has brought about massive transformational changes in business workflows and has led to an increasing demand among firms for implementing AI-based solutions. As per Gartner, around 37 percent of organizations are using AI in some form, a 270 percent increase compared to last year. Another Gartner prediction is that around 80 percent of emerging technologies will be based on AI, clearly showing that AI implementation will be greatly responsible for business growth. Artificial Intelligence is used to simplify business activities in different industry verticals. From automating tasks with RPA to speeding up decision making with useful insights coming from predictive engines, AI offers tremendous capabilities for businesses. And thus, AI development companies worldwide have started integrating AI with other technologies like Blockchain, IoT, and others to create next-gen solutions. The scope of AI is massive and is predicted to be instrumental for business transformation. Let’s have a look at some of the major trends in AI development for this year and beyond:

Facial Recognition

AI is used in developing facial recognition ...


Read More on Datafloq

Chatbots and Intelligent Automation Solutions Paving the Way towards Seamless Business Continuity

Frequent business disruptions in the form of storms, pandemics, lockdowns, etc., pose a risk to seamless operations and revenue generation in service industries. One day of operational disruption leads to losses worth millions. Semi-automation is not able to stop the cascading business effects of an unprecedented business disruption. Services such as banks, financial services, insurance, healthcare, information technology services, etc., cannot afford the risk of downtime. Chatbots powered by Intelligent Automation are that indispensable solution in the omni-channel customer interface that keeps the business moving 24x7 even in the face of a major business disruption such as a long-prevailing pandemic.

How do Intelligent Automation-powered chatbots offer seamless business continuity?


Chatbots engage diverse skill sets such as Robotic Process Automation (RPA) and Artificial Intelligence (AI) / Machine Learning (ML), in short Intelligent Automation, and offer a lifeline to service industry businesses. Chatbots are located on the key pages of a business website or the social media pages of the business, and can be accessed by customers and prospects round the clock in different international languages. They augment the services of the regular service desk and help tide over most emergency situations. Chatbots can ...


Read More on Datafloq

Stiff competition awaits Reliance Jio as it takes 'made-in-India' 5G tech to the world

Rivals may have a first-mover advantage in developing technology, filing patents

Custom Distribution Service: Midterm Summary

Hello! After an eventful community bonding period, we finally entered the coding phase. This blog post summarizes the work done up to the midterm of the coding phases, i.e. week 6. If some of the topics here require a more detailed explanation, I will write a separate blog post. These blog posts will not have a very defined format but will cover all of the user stories and features implemented.

Project Summary

The main idea behind the project is to build a customizable Jenkins distribution service that can be used to build tailor-made Jenkins distributions. The service would provide users with a simple interface to select the configurations they want to build the instance with, e.g. plugins, authorization matrices, etc. Furthermore, it would include a section for sharing community-created distros so that users can find and download already-built Jenkins WAR/configuration files to use out of the box.

Quick review

Details

I have written separate blog posts for every week of GSoC, and the intricate details for each of them can be found on their respective blog pages. I am including a summary of every phase, supported with the respective links.

Community Bonding

This year, GSoC had a longer community bonding period than any of the previous editions due to the coronavirus pandemic, which gave me a lot of time to explore, so I spent it building a prototype for my project. I realised early on some of the blockers I might face, which gave me more clarity about how to proceed. I also spent this time preparing a design document, which you can find here.

Week 1

In week one, I spent time getting used to the tech stack I would be using. I was pretty familiar with Spring Boot, but React was something I was going to use for the first time, so I spent time studying more about it. I also got the project page ready, along with the issues I was going to tackle and the milestones I had to achieve before the evaluation. I also spent a bit of time setting up the home page and a few front-end components.

Week 2

Once we were done with the initial setup, it was time to work on the core of the project. In the second week, I worked on generating the package configuration and setting up a dummy display page for the plugin list. I also ran into issues with the Jenkinsfile, so the majority of my time was spent fixing it. I finally managed to get around those problems. You can read more about it in the Week 2 Blog post.

Week 3

The last week was spent cleaning up most of the code and getting the remaining milestones in. This was probably the hardest part of phase 1 because it involved connecting the front end and back end of the project. You can read more about it here.

Midterm Update

The second phase has been going on for the past three weeks, and we have already accomplished a majority of the deliverables, including community configurations, WAR downloading, and plugin filtering. More details about the midterm report can be found here.

Getting the Code

The Custom Distribution Service was created from scratch during GSoC and can be found here on GitHub.

Pull Requests Opened: 38

GitHub Issues completed: 36

Accessible Design: A Complete Guide for 2020

Accessible web design has slowly come to the forefront for businesses—but doing it right is still proving to be a challenge. Users representing myriad communities have been vocal about the lack of accessibility while accessing online resources and this is finally leading to change. No matter what business sphere one exists in, being able to reach more users by incorporating accessible design is good business practice. Not everyone accesses the online world in the same ways. Users with impairments—cognitive, hearing, motor, speaking, and visual—interact with sites differently. We share a complete guide to adopting accessible design practices in 2020.

Simplifying Design


Simplicity in design isn’t just important for accessibility—it also leans into the graphic design trend of minimalism that is taking over in 2020. The clearer the visual layout, the better the user experience—this is because it creates a definitive hierarchy of elements on the page. To create a simple and accessible design, one should group similar elements, depending on how they relate to each other in terms of size and color. Elements that follow one another should be placed in distinct hierarchies so users can easily understand how one section leads to another. The website design should ...


Read More on Datafloq

Jenkins 2.235.3: New Linux Repository Signing Keys

The Jenkins core release automation project has been delivering Jenkins weekly releases since Jenkins 2.232, April 16, 2020. The Linux repositories that deliver the weekly release were updated with new GPG keys with the release of Jenkins 2.232.

Beginning with Jenkins LTS release 2.235.3, stable repositories will be signed with the same GPG keys that sign the weekly repositories. Administrators of Linux systems must install the new signing keys on their Linux servers before installing Jenkins 2.235.3.

Debian/Ubuntu

Update Debian compatible operating systems (Debian, Ubuntu, Linux Mint Debian Edition, etc.) with the command:

Debian/Ubuntu
# wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -

Red Hat/CentOS

Update Red Hat compatible operating systems (Red Hat Enterprise Linux, CentOS, Fedora, Oracle Linux, Scientific Linux, etc.) with the command:

Red Hat/CentOS
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

Frequently Asked Questions

What if I don’t update the repository signing key?

Updates will be blocked by the operating system package manager (apt, yum, dnf) on operating systems that have not installed the new repository signing key. Sample messages from the operating system may look like:

Debian/Ubuntu
Reading package lists... Done
W: GPG error: https://pkg.jenkins.io/debian-stable binary/ Release:
    The following signatures couldn't be verified because the public key is not available:
        NO_PUBKEY FCEF32E745F2C3D5
E: The repository 'https://pkg.jenkins.io/debian-stable binary/ Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Red Hat/CentOS
Downloading packages:
warning: /var/cache/yum/x86_64/7/jenkins/packages/jenkins-2.235.3-1.1.noarch.rpm:
    Header V4 RSA/SHA512 Signature, key ID 45f2c3d5: NOKEY
Public key for jenkins-2.235.3-1.1.noarch.rpm is not installed

Why is the repository signing key being updated?

The original repository GPG signing key is owned by Kohsuke Kawaguchi. Rather than require that Kohsuke disclose his personal GPG signing key, the core release automation project has used a new repository signing key. The updated GPG repository signing key is used in the weekly repositories and the stable repositories.

Which operating systems are affected?

Operating systems that use Debian package management (apt) and operating systems that use Red Hat package management (yum and dnf) need the new repository signing key.

Other operating systems like Windows, macOS, FreeBSD, OpenBSD, Solaris, and OpenIndiana are not affected.

Are there other signing changes?

Yes, there are other signing changes, though they do not need specific action from users.

The jenkins.war file is signed with a new code signing certificate. The new code signing certificate has been used on weekly releases since April 2020.