Friday, 31 May 2019

Becoming a Jenkins contributor: Newbie-friendly tickets

Two months ago I published an introductory article on the journey of becoming a Jenkins contributor. In that first article, I reviewed the jenkins.io site, learning about the multiple ways in which we can participate and contribute. Then, I described a first, basic contribution I made to the site repository.

Now, in this new article, we will explore a more advanced contribution: committing code to the actual Jenkins core.

Getting started with tickets and processes

Beginners guide to contributing and Jenkins Jira

Reviewing the developer section on jenkins.io is probably the best starting point, and a reference link to keep handy. The beginners guide to contributing to Jenkins can also be useful, since it points to the different repositories, tools (such as the issue tracker) and governance documents. It also describes best practices for commit messages, code style conventions, PR guidelines, etc.

Once we get a general understanding of these resources and want to actually start coding, we may get stuck trying to come up with something to work on.

Visiting the Jenkins issue tracker feels like the natural next step, since it is full of potential bugs and enhancements that have already been reported by the community. However, it is quite easy to feel overwhelmed by the possibilities listed there. Bear in mind that in a 10+-year-old project like this, most of the things that are reported are tricky for a newcomer to work on. For that reason, filtering by newbie-friendly tickets is probably the best idea.

Figure 1. Screenshot displaying the list of newbie-friendly tickets in the Jenkins Jira

Selecting a ticket

In my case, I spent some time reviewing the newbie-friendly tickets, until I found one that seemed interesting to me and also looked like something I would be able to fix:

Figure 2. Screenshot of the ticket I decided to work on

Processes

At this stage, when we have decided to take ownership of a ticket, it’s a good practice to let the rest of the community know that we are planning to start working on it. We can do so easily by assigning the ticket to ourselves (see the “Assign” button below the ticket summary).

Assigning the ticket to ourselves in the Jenkins Jira lets other contributors know that we are planning to take care of the ticket; and in case they are also interested in contributing to it, they will know whom to reach if they want to coordinate work or ask for status. That said, it is worth mentioning that assigning a ticket to yourself does not mean that other contributors cannot work on it from then onwards. Jenkins is an open-source project and anyone is welcome to create their own PRs, so anyone can still propose their own solution to the ticket. But, as you can guess, if the ticket is assigned to somebody, most people will probably reach out to the assignee before starting to work on it.

Related to this, it is also important to bear in mind that we should not postpone work on the ticket for too long once we have assigned it to ourselves. Other potential contributors might be ignoring the ticket because they see you assigned to it.

Once we are about to actually start working on the ticket, it is also a good practice to click the “Start Progress” button. This action will change the status to “In progress”, signaling to the community that we are currently working on this particular ticket.

Setting up the necessary tools on our computer

Configuring, installing and testing

As described in the first article of this journey, the initial step to start contributing to a particular repository is to fork it to our GitHub account, and then clone it to our local computer.

As usual, in the Jenkins core repository the CONTRIBUTING file describes the necessary steps to get the repository working locally. This includes installing the necessary development tools: Java Development Kit (OpenJDK is the recommended choice), Maven and any IDE supporting Maven projects. Note that instructions to install JDK and Maven are linked in the contributing guidelines.

Once we have all the necessary tools installed and configured, we are ready to build Jenkins locally and also to run tests.

Getting down to business

Reviewing ticket details

Now that I was ready to start working on the ticket, I had to review it in more detail, to fully understand the problem.

The description of the ticket I was planning to work on included two links. The first one was to a screenshot that showed the actual bug. It showed how several non-compatible plugins were being selected when clicking “All”, even though the intended behavior was to only select the compatible plugins. The second link was a reference to a code fragment that showed other validations that had to be taken into account when checking if a plugin update was compatible or not with the current installation.

Reproducing the issue locally

Even though I had now understood the issue in better detail, I had not seen it myself live yet, so it seemed to me that the next logical step was to reproduce it locally.

To reproduce the issue locally on our computer, we can either use the local war file that we can generate by building Jenkins from the source code, or we can download the latest available Jenkins version and run it locally. When I worked on this ticket, the latest available version was 2.172 and, when I built it from the sources, I saw version 2.173-SNAPSHOT, which was the next version, the one the community was already working on.

In general it is a good idea to reproduce bugs locally, not only to get a better understanding, but also to make sure they are actual issues. It could always be an issue happening only on the reporter’s end (e.g. some user misconfiguration). Or it could be a ticket referencing an old issue that has already been fixed. This latter possibility didn’t sound that strange to me, since the ticket was one month old. It could have been handled by someone else in the meantime, without them noticing the ticket existed. Or the contributor might have forgotten to update the ticket in the issue tracker after the fix was committed.

So, for all the reasons above, I ran the latest Jenkins version locally. From a terminal, I went to the folder in which the war file was placed, and ran java -jar jenkins.war, which starts Jenkins locally on http://localhost:8080.

From the home page I navigated to the Plugin Manager (clicking the “Manage Jenkins” link on the left-hand side and then selecting “Manage Plugins” in the list).

In the Manage Plugins page, the list of plugin updates appears. In my case, since I re-used an old JENKINS_HOME from an older installation, several plugins showed up in the list, requiring updates. That allowed me to test the behavior that was supposed to be failing.

When I clicked on the “Select all” option at the bottom, this is what I got:

Figure 3. Screenshot showing the error, reproduced locally, after clicking “Select All”

As it had been reported in the ticket, the behavior was inconsistent. In a previous version, the behavior of the “All” selector had been changed (with the best intent), aiming to only check the compatible plugins. However, as can be seen in the screenshot, the behavior was not the expected one. Now, neither “all” nor “only compatible” plugins were being selected, since some plugins with compatibility issues were also being checked, unintentionally.

Figuring out a fix

When reading the conversation in the original PR in which the behavior of the “All” selector had been changed, I saw a suggestion of having a separate “Compatible” selector, thus leaving the “All” selector with the traditional behavior. I liked the idea, so I decided to include it as part of my proposed change.

At this stage, I had a clear picture of the different things I needed to change. These included: 1) The UI, to add a new selector for “Compatible” plugins only, 2) the JS code that applied the changes to the interface when the selectors were clicked and 3) probably the back-end method that was determining if a plugin was compatible or not.

Applying the change

As usual, and as it is also recommended in the contributing guidelines, I created a separate feature branch to work on the ticket.

After reviewing the code, I spent some time figuring out which pieces I needed to change, both in the back-end and also in the front-end. For more details about the changes I had to make, you can take a look at the changes in my PR.

As a basic summary, I learned that the classic Jenkins UI was built using Jelly and, after understanding its basics, I modified the index.jelly file to include the new selector, assigning the function that checked the compatible plugins to this new selector and re-using the existing “toggle” function to set all checkboxes to true in the case of “All”. I also had to modify the behavior of the checkPluginsWithoutWarnings JavaScript function to un-check the incompatible plugins, since there was now an actual “All” selector that had not existed previously, and that un-check case was not being taken into account. Then I created a new back-end Java method, isCompatible, inside the UpdateSite.java class, which calls all the different methods that check the various compatibility aspects and returns the combined boolean result. For this change, I included an automated test to verify the correct behavior of the method, contributing to the test coverage of the project. Finally, I modified the table.jelly file to call the new back-end method from the UI, replacing the existing one that was not taking all cases into account.
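To give a rough idea of the shape of that back-end change, here is a minimal, self-contained Java sketch of such a combined compatibility check. It is not the actual Jenkins source: the names of the individual per-aspect checks are assumptions made purely for illustration, and the real method lives in UpdateSite.java next to the existing checks.

    // Hypothetical sketch only, not the actual Jenkins source. The per-aspect
    // checks below are stubs standing in for the real ones in UpdateSite.java.
    public final class CompatibilityCheckSketch {

        boolean isCompatibleWithInstalledVersion() { return true; }  // assumed name of an existing check
        boolean requiresNewerCore()                { return false; } // assumed name of an existing check
        boolean areDependenciesCompatible()        { return true; }  // assumed name of an existing check

        // The combined check: a plugin update is only treated as compatible when
        // every individual aspect is satisfied, so the UI can rely on one answer.
        public boolean isCompatible() {
            return isCompatibleWithInstalledVersion()
                    && !requiresNewerCore()
                    && areDependenciesCompatible();
        }

        public static void main(String[] args) {
            // With the stub values above, the combined result is "true".
            System.out.println(new CompatibilityCheckSketch().isCompatible());
        }
    }

The point of having one combined method is simply that the UI code (table.jelly and the JavaScript) can rely on a single yes/no answer instead of repeating each individual check.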

As you can see, the change involved touching different technologies. Even if you face a similar situation in which you are not familiar with some of them, my advice would be to carry on and not let that stop you. As software engineers, we should focus on our evergreen skills rather than on knowing specific technologies: adapting to whatever framework we have to use at a given moment, learning whatever we need about the new technology to complete the task, and applying cross-framework principles and best practices to provide a quality solution.

Result

After the changes described above, the resulting UI includes a new option, and the corresponding behaviors of the three selectors work as expected:

Figure 4. Screenshot of the new version, displaying the behavior achieved by clicking the new “Compatible” selector

Publishing the change

Submitting a Pull Request

In the contributing guidelines of the Jenkins core repository there is also a section about proposing changes, which describes the necessary steps that have to be followed in order to create a Pull Request (PR) with our change.

Furthermore, there is a PR template in the repository, which will be loaded automatically when creating a new PR and that will serve as a basis for us to provide the necessary information for the reviewers. We are expected to: include a link to the ticket, list the proposed changelog entries describing our changes, complete the submitter checklist and add mentions to the desired reviewers (if any).

In my case, I followed the template when creating my PR, completing all sections. I linked the Jira ticket, provided two proposed changelog entries, completed the submitter checklist and added three desired reviewers (explaining why I thought their reviews would be valuable). I also linked the original PR that was referenced in the ticket, to provide further context.

Figure 5. Screenshot of the PR I submitted

The approve and merge process

As stated in the contributing guidelines, typically two approvals are needed for the PR to be merged; and it can take from a few days to a couple of weeks to get them. Sometimes, one approval from a reviewer and a 1-week delay without extra reviews is considered enough to set the PR as ready-for-merge. However, both the time-to-merge and the number of approvals necessary might vary, depending on the complexity of the change or the area of Jenkins core that it affects.

After the necessary approvals have been received, a Jenkins core maintainer will set the PR as ready-for-merge, which will lead to it being merged into the master branch when one of the following releases is being prepared.

In my case, I received a review from Daniel (the reporter of the ticket and one of my “desired reviewers”) the very day I submitted the PR (April 14th). He made several very useful suggestions, which led to changes on my side. After those changes, Daniel made minor remarks and my PR got another review, which was its first approval. After a week had passed without further news, I applied the remaining minor suggestions from Daniel and a few days later received another approval, to which Daniel’s final approval was added, leading to the PR being labeled ready-for-merge and merged later that same day (April 26th).

Figure 6. Screenshot of the final state of the PR, after being merged

Release

For every new release, repository maintainers will select a set of PRs that have already been labeled ready-for-merge, merge them to master, prepare changelogs (often using the suggestions included in the PRs by the authors) and proceed with the creation of the new release. There is no additional action required from Pull Request authors at this point.

Every week a new version of Jenkins is released, so when your PR is merged, your changes will most likely become part of the following weekly release of Jenkins.

Eventually, your changes will also reach the Long-Term Support (LTS) release, which is a different release line, aimed at more conservative users. This release line gets synced with the weekly releases by picking, every 12 weeks, a relatively recent weekly release as the baseline for the new LTS release. In between, intermediate LTS releases are created only to include important bug fixes, cherry-picked from the weekly releases. New features are typically delayed until the next LTS baseline is defined.

Regarding the example described in this post, it was released in Jenkins 2.175 (a weekly release) soon after being merged, and it will probably be included in the next LTS, which should be released next month (June 2019).

Done!

And that’s it! We have now covered the whole lifecycle of a new proposed change to Jenkins core. We have reviewed the process from the very beginning, picking a ticket from the Jenkins issue tracker; all the way to the end, having our change released in a new Jenkins version.

If you have never contributed but are willing to do so, I hope this article motivates you to go back to the list of newbie-friendly tickets, find one that looks interesting to you, and follow the steps described above, until you see your own change released in a new Jenkins version.

Remember, don’t try to solve a complicated issue as your first ticket; there are plenty of easier ways in which you can contribute, and every little helps!

AI System Kinship Relationships And Their Ramifications: The Case of Autonomous Cars

By Lance Eliot, the AI Trends Insider

If you look at history, there is a lot of credence to the power and attraction of kinship. For those of you that have been Game of Thrones fans, kinship was certainly a front-and-center theme throughout the entire series; though admittedly that is a fictional portrayal, I dare say we might all agree that history showcases the same phenomenon.

If you are a history buff, you likely know that the famous Heidelberg manuscript from the 13th century indicated that kin-blood is not spoiled by water. In more modern times, we’ve come to express this as the now-classic saying that blood is thicker than water. Whichever way you might prefer to state it, the underlying notion is that bloodline family and familial relationships are considered a very strong bond.

Indeed, some would assert that the familial bonds are stronger than any other kind of friendship or relationship that you might ever formulate. Family and bloodline prevails over anything else, in their view.

Humans have often had to make tough choices in life and at times seemed to choose the bloodline, even when it might have been more sensible to not do so. Animals seem to also have a familial tendency and you can readily watch online videos of wild animals that will take great chances to save or protect their own bloodline offspring. Kinship appears to be quite pervasive.

Many movies and stories abound about how someone got into hot water by giving a job to a bloodline relative that otherwise likely wasn’t qualified for the role. Today’s news covers instances of wealthy founders that control large-scale conglomerates and how they put family connections above any other business dealings. There seems to be a magnetic power that attracts and bonds by kinship.

It is difficult to say clearly why this kinship aspect matters so much. Why should the fact that someone else has the same bloodline as you make such a difference? One would assume that how people treat each other, and the other facets of human-to-human relationships, would determine the type and strength of the bond that forms. Acts and deeds would presumably be the highest sign of creating a bond. But, somehow, we nonetheless still often resort back to the seemingly simplistic matter of bloodline.

Those that study the nature of evolution and abide by Darwin’s theories would suggest that it is rather apparent why the bloodline would be so revered. When you are faced with the basics of survival, you need to have others that can protect your back. By becoming a kind of pack, your chances of survival are presumably enhanced. The question then becomes whom can you or should you form a pack with? In a rudimentary caveman or cavewoman manner, it would have made sense to focus on your own bloodline.

One aspect is that your own bloodline would be a known quantity rather than an unknown. Those with your bloodline were more likely to have characteristics similar to yours. They would tend toward the same physical attributes and presumably similar cognitive and personality attributes. This would appear to make the pack more cohesive and its members more strongly drawn to one another. Strangers would be less likely to stick out their neck to help you, while your own kind would perhaps be more willing to go the extra mile for your sake.

I’d wager that we all can see the logic of this bloodline calculation. Does it still make sense today, in a modern world? One might argue that it is a legacy carryover from the days of basic survival. It might not provide much value or protection in a modern age. Or, perhaps it still does, and in spite of the modern conveniences of life, the foundational familial bond is still essential. No matter how space age we become, it could be that the kinship rule will always apply.

Even Plants Might Leverage Kinship

Here’s a twist for you: What about plants?

Do plants have a similar kinship aspect? Would a plant be willing to aid more so its own bloodline of plants, doing so over a non-bloodline plant?

I think that most of us would say it is preposterous to suggest that plants would have a kinship. Plants aren’t people. Plants aren’t animals. Plants are, well, they are plants. They don’t know what they are doing. They don’t know how to act towards others. At a quick glance, it would seem that plants cannot possibly abide by any kind of bloodline familial kinship. The idea itself is presumably outlandish.

There is a growing contingent of plant evolutionary ecologists that would claim you are wrong in your assumption about plants.

Over ten years ago, a researcher at McMaster University published a paper reporting that there are plants that seem to exhibit kinship behaviors. Since then, there has been a back-and-forth between fellow researchers trying to further prove the contention and others that have tried to debunk it.

What kinds of research findings provide support for the plant kinship theory?

It has been suggested that plants will spread their roots when around non-bloodline plants and will tend to rein in their roots when near bloodline plants. This willingness to narrow the reach of the roots is an indicator that the plant is keen to allow other nearby plants a greater chance to survive and extend their own roots. If the plant extends its roots, it is saying that the war is on and other nearby plants will need to compete with the root system of the spreading plant. Apparently, one might conclude, the plant is narrowing its roots to support its kin, while not narrowing its roots when surrounded by non-kin.

That does seem interesting. Is it a compelling case? Those opposed to the assertion that kinship is involved would say that there are other ways to explain the phenomena. It could be that the roots of a like kind are simply preventing the roots of the other plant from reaching out. Meanwhile, roots of a different kind do not have the same mechanism, and so the plant spreads out into their territory. The claim of a form of causation based on kinship is misleading, some would say, and it is merely a matter of whether the plants are of a like type or not.

Here’s another plant example of potential kinship. Some studies show that a plant will shift the angle and direction of its leaves to reduce the impact of shadows being cast onto another nearby plant. This helps the nearby plant by ensuring that it gets more sunshine and, therefore, more nourishment from the sun. And guess what? Yes, you probably guessed that supposedly plants of the same kinship were more likely to alter their leaf positioning, aiding other nearby plants of their own familial line. They tended not to do so for other non-kin plants.

Further proof that plants form kinships? Maybe, maybe not.

These alleged kinship effects are being described as altruistic. An individual plant will appear to forego some of its own chances of survival to aid the chances of survival for its kin. One might say that the online videos of lions willing to attack hyenas that are attacking their kin are quite similar. It seems overly risky for a lion to enter into the fray and attempt to knock back the hyenas, doing so at great personal risk. They seem to do this spurred by kinship and bloodline effects.

A recent study of plants by researchers at the University of Lausanne indicated that after growing over 700 seedlings in pots, some of which were alone in a pot and others that had up to six nearby neighbors, the kin plants sharing a pot apparently sprouted more flowers. This is a handy survival and thriving action, because the more flowers there are, the greater the chances of attracting pollinators to the plants. The more pollinators, the greater the chances of continuing the plant legacy.

To make those added flowers, each plant would need to consume a greater amount of energy and therefore be somewhat sacrificing itself for the greater collective good. The more crowded a pot was with kinship plants, the more the flowers were blooming.

Again, is this absolute proof of kinship in plants? You could argue that it does not provide irrefutable evidence for it. We might be back to the notion that some plants have inherent mechanisms that come into play when near other kinds of plants. Perhaps we could use other kinds of plants that aren’t of a kinship nature, but that have similar attributes, and maybe get the same results.

Indeed, there is a viewpoint that kinship sometimes draws kin together, while it does not draw in non-kin, or might even go further and attempt to undermine or repel non-kin. In other words, we might fight for our kin. We might not go out of our way to fight for our non-kin. We might also fight to keep our non-kin at bay. This means we are overtly rejecting non-kin, rather than simply trying to accept or aid our own kin.

Throughout all of this, you might be asking yourself how in the world does one plant even know whether another plant is a kin or not a kin?

With humans, we can talk with each other, we can look at each other, we can touch each other, and otherwise use our various senses to try to figure out a kin versus a non-kin. You might say that lions can do the same, though perhaps not the talking part (though they can make vocalizations that allow for a like-kind use of sounds as a communication medium).

Among the plant kinship believers, one idea is that plants emit chemical indicators and those chemotypes can be a signal to other plants. This might be emitted on the leaves, stalk, and via the roots of the plant, allowing for communicating above and below ground. It could be a kin recognition signal. Another idea is that it could be via light, since plants tend to have light-related sensors. All told, there are more ways for plants to possibly communicate than you might have at first considered.

Not everyone is on-board the plant kinship train. Disputes about the research designs are ongoing. Those that are proponents of the kinship theories are apt to also offer new ideas about how to exploit the feature. Knowing that a kinship exists, you could apparently leverage the capability to more readily regrow or regenerate a forest that has been undermined. In any case, the plant kinship debate will likely continue for a while and we’ll have to wait and see how it plays out.

AI Autonomous Cars And Kinship

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One interesting aspect will be how various AI self-driving cars act toward each other, for which there might be a kinship element involved.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a small illustrative sketch follows the list):

  •  Sensor data collection and interpretation
  •  Sensor fusion
  •  Virtual world model updating
  •  AI action planning
  •  Car controls command issuance
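To make the cycle a bit more concrete, here is a tiny, self-contained Java sketch of one pass through those five steps. Every class and method name in it is invented for this article and does not reflect the internals of any actual auto maker’s or tech firm’s system.

    // Illustrative-only sketch of one pass through the five steps above. All
    // names are invented for this article and do not reflect any real system.
    public final class DrivingTaskCycleSketch {

        record SensorReadings(String summary) {}
        record WorldModel(String state) {}
        record ActionPlan(String maneuver) {}

        static SensorReadings collectAndInterpretSensors() {        // step 1
            return new SensorReadings("camera, radar and LIDAR readings");
        }

        static SensorReadings fuseSensors(SensorReadings raw) {     // step 2
            return new SensorReadings("fused view of " + raw.summary());
        }

        static WorldModel updateWorldModel(SensorReadings fused) {  // step 3
            return new WorldModel("surroundings built from " + fused.summary());
        }

        static ActionPlan planActions(WorldModel model) {           // step 4
            return new ActionPlan("stay in lane, keep a safe following distance");
        }

        static void issueCarControls(ActionPlan plan) {             // step 5
            System.out.println("Issuing controls for: " + plan.maneuver());
        }

        public static void main(String[] args) {
            // One pass of the cycle; a real system repeats this many times per second.
            issueCarControls(planActions(updateWorldModel(fuseSensors(collectAndInterpretSensors()))));
        }
    }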

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the notion of kinship and familial relationships, let’s consider how this kind of element might come to play regarding the advent of AI self-driving cars.

Driving And Kinship Versus Non-Kinship With Other Drivers

Suppose you are driving your car on a busy highway. Your daughter is driving her car and is just ahead of you in traffic. You catch up with her and realize that she is going to try to make a right turn into an upcoming shopping center.

The highway traffic is moving at a fast clip and you realize that when she tries to make the right turn, she will potentially have to make it very quickly so as to not slow down traffic. Making such a turn rapidly might not be a good idea and could lead to her hitting a pedestrian or another car near the entrance to the shopping center where she is going to make her turn. You are also worried that when she starts to make the right turn, a car behind her on the highway might be impatient, fail to notice that she is going to make a turn, and ram into her car.

Therefore, you decide to stay directly behind her and, when she approaches the point of the turn, you opt to deliberately slow down, pump your brakes, and otherwise try to block the traffic in your lane. This is a means of trying to help her make the turn. It is almost like playing football, wherein one player blocks others to allow the quarterback to make their move. You are creating a kind of traffic buffer so that she can make the right turn at a more measured pace, and to prevent other highway traffic from pressuring her or possibly even hitting her car.

What a great parent you are!

I’d like to now change the scenario just a tad. Suppose the same situation arises, but this time the driver in the car ahead of you is unknown to you. As far as you know, the driver is a complete stranger. How might that alter your actions?

Well, it could be that you are already late getting to work and the traffic has been very frustrating for you. All sorts of idiot drivers seem to be on the road today. Worse still, it is raining, and this has caused traffic to go slower than normal. You are fed up with the drivers! You are fed up with those that are getting in your way as you drive! Everyone should get off the road and let you proceed.

You are driving in the right lane and all of a sudden you realize that the driver in the car directly ahead of you is going to try to make a right turn into the shopping center. This dolt doesn’t seem to realize it is a tough turn to make and will likely cause traffic in the lane to have to slow down. Not another dolt driving a car!

You move up to the bumper of the car and try to pressure them into having second thoughts about making the turn. Meanwhile, you are eyeballing the lane to your left and trying to decide if you can deftly swing into that lane and avoid having to slow down due to the inconsiderate driver that’s going to be making the right turn. You realize that you might be able to sneak into the lane to the left, though it will likely cause the driver in the car behind you to be suddenly surprised at discovering the car ahead is going to make the turn into the shopping center.

Maybe the driver behind you might even smash into the dolt. Fine. That’s two less cars on the roadway today. The driver making the turn might get a harsh lesson in how to properly drive on the highway and streets of your town. The driver behind you that perhaps smacks into the other car, well, that driver is likely a bad driver too. The two bad drivers deserve each other.

That second scenario is a doozy.

In the first scenario, you were willing to move the earth and the moon to support your daughter as she was wanting to make the turn into the shopping center. In the second scenario, you couldn’t care less about the stranger driving the car ahead of you, and indeed you felt it was wrong on their part to even consider or attempt the turn.

We might say that your kinship shaped your driving behavior.

I know that it seems a bit contrived and you might object to how I set up the scenario. I’ll grant you that aspect. You could try to argue that if the other driver was merely someone that you knew, or perhaps a close friend, you might react in the same manner as you would toward a kin, or at least be more protective than if the other driver were a complete stranger. My overarching point was that you might alter your driving behavior depending on who is driving another car, rather than on the driving task or situation per se.

In other words, if you know the person that is driving a car near you, you might well change how you drive, depending upon what your “relationship” with that person is. I think we can all agree this seems plausible. I would even say it is more than plausible; it is probable that you would change your driving behavior.

I realize there are some sticklers out there that will claim they always drive the same way. They are always courteous and fair to other drivers. Always. Or, maybe they are always a jerk to other drivers and won’t change their driving behavior for anyone. They don’t care who is in the other cars. By gosh, they are going to drive as they drive, all the time, the way they do, and continue to cut off other drivers and treat them like dirt.

I’d bet that most of us do change our driving behavior depending upon whether we know the driver that is in another car nearby us. Let’s agree to this notion.

The aspect that we might have more heartburn about would be whether kinship is the deciding factor.

You might change your driving behavior when you see that your boss is driving in the car next to you. Presumably, you would be more deferential in your driving, and if, for example, that driver wanted to get into your lane, I’d bet that you would readily let them cut into your lane of traffic. Or, suppose you see your next-door neighbor driving their car and trying to come into the street as they back out of their driveway; you are likely to slow down and wait for them to proceed, perhaps more so than if you were driving down a street and someone you didn’t know was suddenly trying to back out of their driveway.

Driving Behaviors Dependent Upon The Known Or Unknown Other Drivers

We’ll establish then that your driving behavior can be impacted based on whom else is driving another car and for which your driving and your car will in some manner interact with that other car.

I’m not suggesting you will always suddenly become a kinder and gentler driver to accommodate the other person. Imagine if your neighbor is someone you detest because they have let their dog roam onto your grass and repeatedly leave a treasure for you, and, in spite of your complaining, the neighbor has kept this practice going. In that case, I doubt that you would patiently wait for them to back out of their driveway. You might zoom past or maybe even try to come up with some sneaky means to get them to back out of the driveway improperly and wreck their car.

In each of these instances of you changing your driving behavior, I’ve purposely tried to structure the situation to involve a complete lack of actual direct communication between you and the other driver. In the case of you helping out your daughter, you did not prearrange to aid her in making the right turn, nor did you give her a call on her smartphone as she was driving to explain what you were going to do. Each of the situations so far has been undertaken without any direct communication between you and the other driver.

This allows us to revisit the plants. Are the plants communicating with each other about what to do? Does one plant tell the other one that it can spread its roots and that the plant will let this happen? As earlier mentioned, it might be the case that they “talk” with each other, perhaps via using chemical signaling or the use of light signaling.

The plants though don’t necessarily need to communicate with each other directly, just as you were driving your car and did not communicate with the other drivers directly. I want to be careful how I phrase that aspect. Yes, you didn’t call or speak with the other driver. You might though have communicated in other means.

Your behavior might have been your form of communication.

When your daughter was getting ready to make her right turn, suppose she looked in her rearview mirror and was trying to gauge the nature of the traffic behind her. She maybe already realized that this was a risky turn. She was examining the traffic behind her and might have opted to forego making the right turn, which could be a sensible action if she judged that the oncoming traffic might get confused and potentially ram into her car. Better to go around the block and use a different entrance than to chance getting hit or otherwise holding up traffic.

Upon looking in her rearview mirror, she noticed that the car behind her was slowing down. What luck! The other car was apparently, by coincidence, going to create a kind of traffic buffer and she could take advantage of it, allowing her to safely make the right turn. She might not have had any clue that the other driver was intentionally trying to help out. No matter, the facts are the facts, and the buffer creation has allowed her to proceed with the right turn.

I’d claim that we have these situations continually during our driving efforts. We have situations arise, and at times another driver might be intentionally helping you out, while in other instances the other driver’s behavior just so happens to help you out. You might not ever know what was in the mind of the other driver.

Sure, we sometimes get a hefty clue of what the other driver has in mind. The other day, I was pulling out of a mall and a car coming down the street came to a halt and flashed its headlights at me; it was quite apparent that the driver was purposely acting to enable me to come out of the mall by blocking the lane for me. Was this a completely altruistic act? Does the driver deserve the courteous-driver-of-the-month award?

Yes and no. The other driver ended up turning into the mall at the same place I had exited, which I suppose suggests that the other driver wanted to get me and my car out of the way, allowing them to more easily turn into the mall. The aspect worked out in both our favors. You could argue that it was an altruistic act, or that it was a mutually beneficial act, or that it was selfishly intended to benefit the other person and just so happened to aid me. Interpret it as you wish.

Our driving behavior can alter depending upon the other driver. If the other driver is someone that we know, this is a likely behavior changing factor. We might directly communicate with the other driver, or we might not. Our behavior alone might be our form of communication and no other kind of “direct” communication is undertaken.

We spontaneously collaborate with other drivers when we drive. There is not necessarily any prearranged agreement between one driver and another. There is a lot of discretion when you are driving a car. Of course, you are supposed to drive lawfully, but within the legal definitions of proper driving there is a great deal of latitude in how you drive.

For my article about human foibles of driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For the nature of selfishness and greed in driving a car, see my article: https://www.aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

For my article about the illegal driving acts of self-driving cars, see: https://www.aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

For irrational behaviors and rational behaviors of driving, see my article: https://www.aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/

Autonomous Cars And AI Knowing Or Not Knowing Of Other Drivers

Consider what is going to happen once we have true AI self-driving cars on our roadways.

First, realize that not all AI self-driving cars will be the same. I emphasize this due to the commonly false notion that all AI self-driving cars will be the same. When I give presentations at conferences, I often find that people assume that the AI of self-driving cars will be entirely the same AI, no matter who developed the AI and what kind of car it is.

Wrong!

Let’s set the record straight. The AI of a self-driving car made by auto maker X or tech firm Y will be different from the AI of a self-driving car made by auto maker Z or tech firm Q. There is no underground secret agreement between these auto makers and tech firms. They are all pursuing the AI in their own proprietary ways. There might be some amount of open source in their AI systems, providing a bit of intersection, but otherwise each will have its own idiosyncratic AI.

There is also no set standard that specifies exactly how the AI is supposed to be built, nor how it is supposed to act. Similar to how humans have latitude when driving a car, this lack of any enforced specification means that the AI of each of the auto makers and tech firms can differ from the AI of the others in terms of how it drives a self-driving car.

You might readily have one AI system from say auto maker X that is a very cautious driving system. It avoids taking any chances while driving. Imagine the teenage novice driver that you sometimes get upset about when you get behind them. That could be the AI system of auto maker X, in the sense of exhibiting driving behaviors of a similar nature to the novice teenage driver.

Meanwhile, auto maker Y has gone a different path. For their AI, they decide to make it the type of driver that goes all out. It will readily be the first one to zip ahead when the light turns green at a stopped intersection. When it makes turns, they are always done with gusto. It exhibits completely different driving behaviors than does the AI system of auto maker X.

I realize you might object and ask whether it isn’t possible for auto maker X to “copy” the driving behaviors exhibited by auto maker Y’s AI, or the other way around. Yes, it is. But keep in mind that if you are tempted to force both of their AIs to become the same, consider why you tolerate today that there are different kinds of cars that you can buy, some that are high-powered, some that are stylish, etc.

In essence, I am suggesting that we are likely to have AI systems for self-driving cars that abide by the legal elements of driving and yet differ in other driving behavior respects. It could be that each of these auto makers and tech firms eventually decide to “copy” each other and have the same set of identical driving behaviors, though I’ve questioned this notion and tried to explain why I believe that the “commoditization” of AI self-driving cars is unlikely to occur.

For my article about commoditization, see: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/

For coopetition among AI self-driving car makers, see my article: https://www.aitrends.com/selfdrivingcars/coopetition-and-ai-self-driving-cars/

For the bifurcation of autonomy, see my article: https://www.aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For my article about potentially starting over on the AI for self-driving cars, see: https://www.aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

We are now finally ready to put together all the pieces of this puzzle. What is the puzzle? The puzzle is that I am leading you toward those plants. Yep, we are back to the plants.

We are going to have AI self-driving cars on our roadways and I hope you now agree with me that those AI systems will differ, meaning that the AI driving behaviors will also differ.

Presumably, the AI self-driving cars of auto maker X will exhibit the same driving behaviors as those of the other AI self-driving cars of auto maker X, namely, there will be a kind of kinship. There, I said it, the word kinship has now arisen.

The AI of the auto maker X will be the same and drive the same way, and we could potentially even pick out of a line-up which AI is being used by a particular “anonymous” self-driving car.

How so? If we disguised a bunch of AI self-driving cars so that we could not recognize the car maker, and we put them into a test track someplace and had those true Level 5 AI self-driving cars drive around, I dare say that based on what they do, we could tell you whose AI it was.

I’ll use my somewhat extreme example from my earlier point about auto maker X and auto maker Y. During a run in the test track, we observe that one of the self-driving cars is always being overly cautious. The other one tends to butt up against other cars and tries to jam itself into a lane. I think it would be relatively easy to guess that the cautious self-driving car was using the AI of auto maker X, while the more aggressive self-driving car was using the AI of auto maker Y.

Again, don’t discard my claims due to the extremes of suggesting one is overly cautious and the other is overly risky. There are a lot of other capabilities that will differentiate one AI self-driving system from another in terms of driving behaviors. I am only using the extremes herein to illuminate my point.

On a related aspect, I’ve already predicted that the general public will eventually come to figure out the driving behaviors of the various AI systems. This will lead to humans trying to play “pranks” on those AI systems.

If you are trying to jaywalk, and if you know that the AI of auto maker X is overly cautious, you merely step into the street and know that the AI is going to immediately halt that self-driving car for you. On the other hand, maybe auto maker Y’s AI is not so forgiving. This could lead to untoward actions by humans trying to play with the AI’s and seemingly attempt to “outsmart” them, and we might have injuries and deaths because of it. Pranks aren’t going to be a good idea, nonetheless I’m predicting it is exactly what will happen (likely leading to regulations against humans playing pranks, though I’ve argued we need to make the AI less susceptible to pranks).

For my article about pranking AI, see: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/

For public shaming of AI systems, see my article: https://www.aitrends.com/selfdrivingcars/public-shaming-of-ai-systems-the-case-of-ai-self-driving-cars/

For my article about prevalence induced driving behaviors, see: https://www.aitrends.com/ai-insider/prevalence-induced-behavior-and-ai-self-driving-cars/

For safety aspects about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Heterogeneous AI Systems And Differing Behaviors

We’ve now nearly got all the pieces of the puzzle in place.

With the auto maker X and its AI that exhibits some set of driving behaviors, and with the auto maker Y and its AI that exhibits some other set of driving behaviors, we now have the potential for a kind of kinship.

The AI of X might have been developed under the assumption that other self-driving cars that are also using the AI of X will act in certain driving ways.

For other self-driving cars, with whatever variants of AI they happen to use, auto maker X simply assumes those cars will be driven however they are driven. In effect, the non-X AIs are treated much like human drivers: whatever driving behavior appears on the road is taken as just how that driving is taking place.

Let’s imagine that the auto maker X has made their AI to be this sweet and kindly self-driving driver. It comes along on a highway and detects that the car ahead of it wants to make a right turn into a shopping center. Since this AI is the nice-driver, it opts to slow down and create a traffic buffer for the other car that is turning into the shopping center.

But it might be that the auto maker X’s AI does not always necessarily undertake that approach. Suppose the car ahead did not signal and did not provide any kind of in-advance indication that it was going to make that right turn. The AI of the self-driving car had no way to realize that it could help the other car. Therefore, the car ahead perhaps makes the right turn, doing so without any warning, and the AI of the self-driving car was unable to assist, not having had any means to gauge what might happen.

On the other hand, suppose that auto maker X’s AI, being as courteous as it is, were the one making the right turn. It would have turned on its turn signal well in advance of the turn, and perhaps lightly tapped its brakes, trying to convey to any cars behind it that it wanted to make a right turn.

Revisit the scenario. Let’s pretend we have the car making the right turn and it is a self-driving car by auto maker X, running their AI. Let’s further pretend that the car behind it is also a self-driving car, and it perchance happens to be a self-driving car also by auto maker X, running their AI. The self-driving car making the right turn is courteous and forewarns traffic, and the car behind it is courteous too: upon detecting that the right turn is desired, its AI opts to aid in making it happen.

Kinship!

You could argue that the AI of the two self-driving cars, being of the same “bloodline” and acting in the same driving behaviors, have aided each other. You might say they did so out of kinship of each other. It is akin to you driving your car behind your daughter and opting to aid her in making her right turn. Isn’t love grand.

There wasn’t any direct communication per se about this collaboration. It was merely predicated on their native behaviors. It could be like two plants of the same bloodline, each aiding the other, doing so not necessarily because they talked about it, but due to their inherent embodied natures that click with each other.

How’s that for a plot twist!

We can now embellish the kinship aspects. I’ll add communication into the equation.

AI self-driving cars are going to be outfitted with V2V (vehicle-to-vehicle) electronic communications. This will allow AI self-driving cars to communicate with each other. You might have an AI self-driving car driving down the road and it encounters debris. This AI self-driving car sends out a broadcast to alert any other nearby AI self-driving cars that there is debris in the roadway. The other AI self-driving cars that are nearby receive the message and they switch lanes to avoid coming upon and hitting the debris. Thank goodness for V2V.
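Since V2V message formats are still being worked out, here is a tiny, purely hypothetical Java sketch of the debris scenario just described: one car broadcasts an alert and any nearby listeners decide for themselves how to react. Every name and field is invented for illustration and corresponds to no real V2V standard.

    import java.util.ArrayList;
    import java.util.List;

    // Purely hypothetical sketch: one car broadcasts a debris alert and nearby
    // listeners decide for themselves how to react. No real V2V standard is implied.
    public final class V2VDebrisAlertSketch {

        record DebrisAlert(double latitude, double longitude, String laneDescription) {}

        interface V2VReceiver {
            void onDebrisAlert(DebrisAlert alert);
        }

        static final List<V2VReceiver> nearbyCars = new ArrayList<>();

        // The car that detects the debris broadcasts an alert to whoever is listening nearby.
        static void broadcast(DebrisAlert alert) {
            for (V2VReceiver car : nearbyCars) {
                car.onDebrisAlert(alert);
            }
        }

        public static void main(String[] args) {
            // A nearby car receives the alert and plans its own reaction (e.g. a lane change).
            nearbyCars.add(alert -> System.out.println(
                    "Debris reported in the " + alert.laneDescription() + "; planning a lane change."));
            broadcast(new DebrisAlert(34.0522, -118.2437, "right lane"));
        }
    }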

For the emergence of omnipresence due to V2V, see my article: https://www.aitrends.com/selfdrivingcars/omnipresence-ai-self-driving-cars/

For my article about V2V and edge computing, see: https://www.aitrends.com/selfdrivingcars/edge-computing-ai-self-driving-cars/

For my article about OTA, see: https://www.aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For the aspects of debris detection and coping behaviors, see my article: https://www.aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/

For my article about 5G, see: https://www.aitrends.com/selfdrivingcars/5g-and-ai-self-driving-cars/

V2V And Communications Vital To Kinship Notion

Let’s see how V2V comes to play in this kinship element.

The self-driving car trying to make the right turn into the shopping center could broadcast via V2V a message letting other nearby AI self-driving cars know that the right turn is coming up. The other AI self-driving cars would presumably receive the message and ascertain what they will do about it.

I’m sure that you are already assuming that of course the other AIs will all be courteous and, upon this V2V notification, slow down and make sure that the right turn can be made safely. Why do you believe that to be the case? Because you have fallen again into the trap of thinking that all the AIs of all the different self-driving car makers will be the same. As mentioned, they won’t be.

Perhaps auto maker Y’s self-driving car happens to be behind the AI self-driving car of auto maker X that is trying to make the right turn. The AI of Y might decide to avoid the right turn situation entirely and instead switch lanes. That’s perfectly legal. There is no legal requirement that the auto maker Y self-driving car has to perform a traffic buffer block for the right-turning car. That’s not written in stone someplace.

Therefore, the addition of communication does not necessarily take us out of the kinship mode. You could still claim that kinship will make a difference. It makes a difference because of the driving behaviors and how the AI of the same self-driving cars will be of the same “bloodline” and potentially therefore be more likely to play well together.

Auto maker X’s family of self-driving cars are in a sense, a family. They have familial roots. This could result in them collaborating in a manner that they would not normally do with self-driving cars outside of their family. The auto maker Y’s family is a different family. The differences between the family of X and family of Y can appear in the act of driving and performing driving tasks on the roadways.

The communications aspects can either further reinforce the family, or it could potentially allow other families to become more family-like with each other.

When the right-turning self-driving car sent out the V2V about wanting to make the right turn, it could have also asked the self-driving car behind it to please act as a traffic buffer. In that case, the other self-driving car might go against its “normal” or default tendencies and opt to serve as a traffic buffer. Of course, there is no requirement that the other self-driving car has to comply with the request. It might turn down the request and basically say, hey buddy, you are on your own there, good luck making the right turn.
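To illustrate how such a request might interact with the kinship idea, here is a small, purely hypothetical Java sketch of one possible decision policy: the receiving car helps a same-maker ("kin") car whenever it is safe to do so, and helps cars from other makers only in riskier cases. This is not a real protocol or any auto maker’s actual policy; all names are invented for this article.

    // Purely hypothetical sketch of how one AI might weigh a V2V "act as a traffic
    // buffer" request differently for a same-maker ("kin") car versus another maker.
    // No such standard message or policy exists today; all names are invented here.
    public final class BufferRequestPolicySketch {

        record BufferRequest(String requestingMaker, String maneuver) {}

        static final String OUR_MAKER = "AutoMakerX";

        // Returns true if this car decides to slow down and shield the requester.
        static boolean decideToActAsBuffer(BufferRequest request, boolean safeToSlowDown) {
            if (!safeToSlowDown) {
                return false; // never trade away safety for courtesy
            }
            boolean sameFamily = OUR_MAKER.equals(request.requestingMaker());
            // A "kinship" policy: always help same-maker cars, help others only for riskier maneuvers.
            return sameFamily || "unprotected right turn".equals(request.maneuver());
        }

        public static void main(String[] args) {
            System.out.println(decideToActAsBuffer(new BufferRequest("AutoMakerX", "right turn into a lot"), true)); // true
            System.out.println(decideToActAsBuffer(new BufferRequest("AutoMakerY", "right turn into a lot"), true)); // false
        }
    }

Whether any real AI would adopt such a policy is, of course, exactly the open question this discussion is raising.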

The nature of how V2V is going to work is still being figured out. It is relatively easy to agree to the protocols about what kinds of messages will be sent out. Trying to also agree to the meaning of the messages and whether or not other self-driving cars and their AI need to abide by things like requests, well, it’s a lot harder to settle those aspects.

For my article about the boundaries of AI, see: https://www.aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the potential for a singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the use of a Turing Test for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For whether AI will become a super-intelligence, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

Conclusion

I know that some readers might misinterpret my remarks and inappropriately commingle this notion of kinship and bloodline, as though I am suggesting that the AI of self-driving cars is going to be sentient.

It would almost be too easy to take that tack. If we said that AI will become sentient and is therefore presumably like humans, I would guess that we’d all more easily accept the idea that such AI systems would have kinship with each other.

I am not at all using the sentience get-out-of-jail-free card, which is often played when you cannot otherwise find your way out of an AI-related pickle. You just toss down the freebie card and say that, by magic, the AI will become sentient.

In fact, I purposely walked you through the research about plants, doing so to shift you away from the sentience topic and avoid having to go there. When you think about kinship for living animals and humans, it is immediately coated with the sentient topic. On the other hand, bringing up plants, well, it’s a means to get away from the sentient thing and focus instead on more rudimentary aspects.

I believe it is a pretty evident open-and-shut case that the AI of one self-driving car maker is going to likely be a better “fit” in terms of driving behaviors with the self-driving cars of its own ilk. I suppose an AI developer could go out of their way to make me wrong on that aspect, and purposely try to make their AI self-driving cars combative with their own kind. Good luck on that. I doubt it’s going to get you many brownie points.

For my article about egocentric AI developers, see: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For groupthink dangers among AI developers, see my article: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For internal naysayers and AI developers, see my article: https://www.aitrends.com/selfdrivingcars/internal-naysayers-and-ai-self-driving-cars/

For my article about the repercussions of burnt out AI developers, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about road rage, see: https://www.aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

We need to consider the ramifications of having different styles of self-driving car driving, meaning that the AI’s will differ across the auto makers and tech firms building them. Though they will all presumably be driving in a legal manner, there is a lot more to driving than just driving by the law. The latitude you take, how you act and react to other drivers, and what you do in various driving situations all come down to driving behaviors.

During the time period when we will have AI self-driving cars mixing with human-driven cars, the ante is raised, because now you’ll have AI interacting with other AI (or is it AI versus other AI?), along with AI interacting with human drivers (or is it AI versus human drivers?). We might get some amount of road rage from humans that don’t like the driving behaviors of some of the AI’s. If you are intent on acting out toward an AI self-driving car, you’d better think twice; perhaps the AI developer planted a road-rage reaction routine and you won’t like what happens once you activate it. Hey, I wonder if I just planted that seed.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

Air Force Partners with MIT to Accelerate AI in 10 Project Areas

A new program that will strive to make “fundamental advances” in artificial intelligence is coming from the Air Force and MIT, the two organizations announced on May 20.

The “MIT-Air Force AI Accelerator” will support at least 10 MIT research projects in areas like disaster relief, medical readiness, data management, maintenance and logistics, vehicle safety and cyber resiliency. Project teams will be made up of MIT faculty, staff and students, as well as members of the Air Force. The Air Force plans to invest around $15 million per year in the collaboration.

“MIT is the leading institution for AI research, education, and application, making this a huge opportunity for the Air Force as we deepen and expand our scientific and technical enterprise,” Heather Wilson, secretary of the Air Force, said in a statement. “Drawing from one of the best of American research universities is vital.”

“This collaboration is very much in line with MIT’s core value of service to the nation,” Maria Zuber, the school’s vice president for research and the E.A. Griswold Professor of Geophysics, said. “MIT researchers who choose to participate will bring state-of-the-art expertise in AI to advance Air Force mission areas and help train Air Force personnel in applications of AI.”

The accelerator is just the latest thread in the Air Force’s — and Department of Defense’s — interest in artificial intelligence.

In June 2018 the DOD launched its Joint AI Center (JAIC), with the goal of exploring and developing the defense agency’s use of the “profoundly significant” technology that is artificial intelligence. Among its various goals, JAIC aims to increase the DOD’s collaboration with academia and the private sector.

The Air Force also runs another kind of accelerator — a focused, short-term program for up and coming tech startups — in partnership with TechStars in Boston. The program introduced its 2019 cohort in February.

The effort, known as the MIT-Air Force AI Accelerator, will leverage the expertise and resources of MIT and the Air Force to conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems. The Air Force plans to invest approximately $15 million per year as it builds upon its five-decade relationship with MIT.

The AI Accelerator can include faculty, staff, and students in all five MIT schools, and will be a component of the new MIT Stephen A. Schwarzman College of Computing, opening this fall. The college will take a strongly interdisciplinary approach to computing, and focus on the societal implications of computing and AI. The MIT-Air Force program will be housed in MIT’s Beaver Works facility, an innovation center located in the Technology Square block of Kendall Square. MIT Lincoln Laboratory, a U.S. Department of Defense federally funded research and development center, will make available its specialized facilities and resources to support Air Force mission requirements.

Read the source article at fedscoop.

How Deep Learning is Incrementally Changing Your Life

By AI Trends Staff

Over the past four years, readers have doubtlessly noticed quantum leaps in the quality of a wide range of everyday technologies. 

The speech recognition functions on our smartphone work much better than they used to. We are increasingly interacting with our computers by talking to them, whether it’s with Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana or the many voice-responsive features of Google.

Chinese search giant Baidu reports customers have tripled their use of speech interfaces in the past 18 months.

Image recognition has advanced. Google, Microsoft, Facebook and Baidu all have features that allow searches and automatic organizing of collections of photos with no identifying tags. You can ask to be shown, say, all the ones that have dogs in them, or snow, or even something fairly abstract like hugs.

Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, enabling them for example to diagnose cancer earlier and less invasively.  

All these breakthroughs have been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning. Many scientists still prefer to call them by their original academic designation: deep neural networks.

Instead of writing code, programmers today feed the computer a learning algorithm, then expose it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it. The computer figures out for itself how to recognize the desired objects, words, or sentences.

In short, such computers can now teach themselves. “You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.
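
As a rough illustration of what “feeding the computer a learning algorithm and data” looks like in practice, here is a minimal, hypothetical PyTorch sketch; the tiny random dataset and the network sizes are placeholders, not anything drawn from the article.

import torch
import torch.nn as nn

# Toy stand-in for "terabytes of data": 1,000 fake 28x28 grayscale images, 10 classes.
images = torch.randn(1000, 1, 28, 28)
labels = torch.randint(0, 10, (1000,))

# The "learning algorithm": a small network plus an optimizer, not hand-written rules.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly show the data and let the weights adjust themselves.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

The point is that no rule for recognizing any particular image is ever written by hand; the loop simply adjusts the network’s weights until its predictions match the labels.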

Read the source article in Forbes.

Why the Government of Tomorrow is Also a Data Organisation

Nowadays, every organisation is a data organisation. This not only applies to commercial organisations, but also to governments. Governments at every level – local, regional, national and supranational – should take a different approach to organise their activities.

However, becoming a data organisation is not an easy feat. It requires governments to rethink all their processes and all citizen touchpoints. Even though governments do not face competition as fierce as commercial organisations do, they too have to adopt technologies such as predictive analytics, blockchain and AI to improve their services.

As part of my new book – The Organisation of Tomorrow – I have developed a model that can help organisations become a data organisation. The D2 + A2 Model helps organisations datafy their processes, distribute their data via the cloud or distributed ledger technologies, analyse their data using descriptive or predictive analytics to sense and seize opportunities, and automate their decision-making and processes using AI. It is a simple four-step roadmap for enterprises and governments to transform their organisation.

The D2 + A2 Model for Governments

Governments are a collection of processes built on top of each other over long periods of time. As a result, almost all governments have bureaucratic processes ...


Read More on Datafloq

Build Your Intelligent Enterprise through a Data Fabric

The future offers interesting and exciting times ahead for most businesses. With data being a big influencer in the enterprise of the future, it is a matter of time before we jump into the era of intelligent enterprises.

Intelligent enterprises are going to be organizations that offer exemplary customer experience and have efficient operations. The ability to give clients what they want throughout their experience with an organization is what sets an intelligent enterprise apart from the others.

With competition between organizations expected to increase with the passage of time, the intelligent enterprise will be the need of the hour. Organizations will want to work efficiently in production and to deliver the most to customers across their experience.

Ronald van Loon recently had the opportunity to attend SAP Sapphire and speak to some of the most renowned Chief Data Officers (CDOs) from across the globe. As part of this venture, both SAP and Ronald van Loon learned a lot about the data culture of today and how the intelligent enterprise of the future can be built through a data fabric.

Main Challenges CDOs Face

CDOs currently face numerous challenges when it comes to building a proper intelligent enterprise. While the prospects ...


Read More on Datafloq

Thursday, 30 May 2019

6 Important Steps to Building a Successful Factory of the Future

What is the factory of the future? Is it a synonym for Industry 4.0, or is it a different concept in its own right? Industry 4.0 and the factory of the future might sound similar, but they are different in some ways. To begin with, the factory of the future is a more elusive concept that isn’t as commonly used as Industry 4.0.

The factory of the future is meant to be your gateway to the future of AI and IoT. Through exploring my interest in the topic and as an Oracle ambassador, I had the opportunity to speak to Hans Michael Krause from Bosch Rexroth about the future of IoT systems. Michael, who is the Director of Product Management PLC and IoT systems, is an industry expert and knows quite a bit about the subject.

Talking to me, Michael said that ‘the factory of the future has elements of Industry 4.0, but it is integrally different from it. The factory of the future also talks about distributing your energy via an inductive factory floor. It involves 5G connectivity and is much more than the bland digitization offered by Industry 4.0. The factory of the future also involves collaborative robots, which are counted as a ...


Read More on Datafloq

Is 2019 The Year Finance Discovers Blockchain?

The contemporary market is inundated with new stories about blockchain on a daily basis, but few analysts who are keeping an eye on this technology have been paying enough attention to how much the financial sector is beginning to embrace this exciting innovation. It’s increasingly becoming clear that 2019 could very well be the year that finance discovers blockchain in a meaningful way for the first time, which could usher in a new era for this burgeoning digital technology.

Here’s why financial institutions and the finance sector are gradually beginning to warm up to blockchain, and what that means for this technology going forward.

Blockchain is now the cool kid on the block

Blockchain was once a business anathema that those in the financial sector shunned to the greatest extent possible, largely because they didn’t trust its promising potential. Things have changed quite a bit over the past few years, however, and blockchain is now the cool kid on the block, vacuuming up droves of media attention while businesses and savvy individuals pour millions into its development. So great has been the change in blockchain’s reputation that major banks and financial institutions are beginning to adopt it into their operations and future plans.

A ...


Read More on Datafloq

How Real-Time Analytics Solve Performance Issues Across Multiple Industries

Real-time data analytics is still a relatively new concept, but it is changing the logistics of countless businesses across the world. One poll showed that 60% of companies use real-time data analytics for better customer service. However, there are many other applications of real-time analytics.

Data analytics experts must familiarize themselves with the technical aspects of this new field of data science, so they can utilize its full potential. Decisionmakers must consider the applications of real-time analytics to get the most value out of it.

The principles of real-time analytics

Science Soft has a detailed overview of the technology behind real-time data analytics. The tutorial points out that data analytics tools like Citrix monitoring software have the capacity to both push and pull data from their servers. Real-time analytics applications must continually pull data and have algorithms that are capable of processing it quickly.

There are two ways that data can be pulled from the database in real-time. The ideal approach is to stream data because it will be displayed with virtually no time lag. Unfortunately, data streaming is not always feasible. Scalability is usually a significant limitation because there is a bottleneck as data volume increases. Therefore, streaming is typically only a practical ...
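
As a rough sketch of the pull-versus-stream distinction described above, the following hypothetical Python fragment contrasts periodic polling with consuming a live stream; the function names and the five-second interval are invented for illustration.

import time

def handle(record):
    """Placeholder for whatever the dashboard or alerting logic does."""
    print("processing", record)

def poll_metrics(fetch, interval_seconds=5):
    """Pull model: ask the server for the latest data on a fixed schedule.
    Simple to build, but the view always lags by up to one interval."""
    while True:
        handle(fetch())
        time.sleep(interval_seconds)

def consume_stream(stream):
    """Streaming model: each event is handled as soon as it arrives,
    so there is virtually no lag -- but the consumer must keep up as
    data volume grows, which is where the scalability bottleneck appears."""
    for event in stream:          # e.g. events from a message broker or websocket
        handle(event)

Polling is easier to schedule and scale, whereas streaming keeps the display lag near zero but requires the consumer to keep pace as data volume grows.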


Read More on Datafloq

Wednesday, 29 May 2019

India has the potential to lead electric two-wheeler race: Sachin Bansal

In an interview with ET, Bansal said Ather has plans to sell over a million units annually in the next five years and reckons the company has the potential to lead the EV two-wheeler space.

Tuesday, 28 May 2019

How AI Came to the Rescue of Scientists Studying the Sun

By Andy Thurai, Emerging Technology Strategist, Oracle Cloud Infrastructure

In 2014, NASA lost a crucial instrument housed on the Solar Dynamics Observatory (SDO) satellite that measured extreme UV rays coming from the sun. With repair costs ranging from millions to billions of dollars, a team from NASA Frontier Development Lab and IBM turned to artificial intelligence and historical data to see if a well-trained model could fill the data void.

I was very intrigued when I heard about this project from my good friends at NASA FDL and IBM recently. What if artificial intelligence can decipher more than images of dogs, cats, and stop signs? What could we learn from looking at images of the sun?

Sunshine is Good for the Soul

The sun is considered the life creator in mythological, mystical and scientific worlds. While there may be other similar solar systems with a star providing as much value as the sun does to us, the impact of the sun in our solar system is immeasurable. Any changes in the sun’s power, however small, whether they are sunspots, solar flares, or coronal mass ejections, affect the earth directly. When the sun misbehaves like that, it affects systems such as global navigation systems (GPS), satellites, radio systems, computers, cell phones, electrical systems, air traffic control, and electric power.

For instance, our last major solar storm event on July 23, 2012, the strongest recorded to date, missed the earth by about a week. If it had hit earth directly, it could have had a “catastrophic effect,” blowing out the majority of electrical, electronic, and communications systems in the world. A study by The National Academy of Sciences estimates a direct hit by such a storm could cause damages as high as $2 trillion. A much weaker event in March 1989 knocked out power for the entire province of Quebec for weeks.

Solar Dynamics Observatory is a Cool Satellite

To keep tabs on such solar shenanigans, NASA launched the Solar Dynamics Observatory (SDO) satellite in 2010 at a cost of about $850 million. The SDO satellite collects various measurements from the sun in hopes of forecasting solar storms and mitigating their effects in and around earth’s space.

The SDO satellite has three major instrument components:

  • Atmospheric Imaging Assembly (AIA) – captures images of the solar atmosphere in multiple wavelengths (up to 10) every 10 seconds at IMAX resolution (roughly 10 times the precision of HD images). In other words, this measures what is happening in the sun’s atmosphere.
  • EUV Variability Experiment (EVE) – measures the solar extreme ultraviolet radiation (EUV) to understand the influence on earth’s (and near-earth space’s) climate changes.
  • Helioseismic and Magnetic Imager (HMI) – studies the oscillations and the magnetic field at the solar surface, or photosphere.

Together these three instruments continuously monitored the sun, producing about 1 TB of data every day.

But the Cool Satellite Broke!

In 2014, a critical component of the EVE instrument broke, and true EUV measurements were no longer available to satellite operators.

This was bad news. First, the EUV is the most variable part of the sun’s spectrum, so a constant measurement above earth’s atmosphere can give us good insight. Second, EUV photons emanating from the sun are absorbed in the upper atmosphere and in the ionosphere, so taking a measurement above those layers is critical. Third, these extreme variations in EUV can cause dramatic effects on the earth’s outer atmosphere. They could potentially cause the outer atmosphere to balloon much larger than it normally is, which can have costly effects on all other satellites.

Fixing the issue was prohibitively expensive, with options ranging from sending a manned mission to the satellite (costing upward of $500 million) to launching a new satellite, which might cost around $1 billion.

As those options were not viable, the scientists and engineers at NASA FDL, IBM, and Nimbix came up with a thought: could AI provide a solution?

AI to the Rescue

The three instruments on the SDO worked well from 2010 to mid-2014. Could deep learning neural networks predict the missing EVE data based on analyzing terabytes of data from the past four years with hundreds of possible models and variations?

“Imagine that you had listened to a symphony playing music for four years,” said Graham Mackintosh at NASA FDL, “and then one of the musicians suddenly stopped playing. Would you be able to mentally fill in the missing music from the performer who had gone silent? This is what the NASA FDL team wanted to do with the symphony of data coming from NASA’s Solar Dynamics Observatory.”

Lucky for the engineers, the AIA and the EVE had produced four years of harmonious data—high resolution images of the sun and corresponding EUV measurements—with which to create models and test them. As I preach often, AI is based on the quality and quantity of the data that you use to create the model/algorithm. The research team ran a “machine learning bake-off,” in which they created thousands of models to validate the hypothesis. After trying multiple architectures such as linear models, multi-layer perceptrons (MLP), convolutional neural networks (CNN), and augmented CNNs, they determined that the augmented CNN best fit their needs.

A CNN is the branch of deep learning that specializes in analyzing visual imagery and learning directly from images. For example, when you are analyzing an image, it is not enough to recognize a certain gesture; you also need to understand what that gesture means in a certain culture. Similarly, the scientists wanted to analyze the superior images of the sun generated by the AIA and predict the corresponding EUV radiation measurements.
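
As a loose illustration of what such an image-to-measurement model can look like, here is a minimal, hypothetical PyTorch sketch of a CNN regressor; the channel count, output size, and layer sizes are placeholders and are not the team’s actual architecture.

import torch
import torch.nn as nn

class EUVRegressor(nn.Module):
    """Tiny CNN that maps a multi-channel, AIA-style solar image to a vector
    of EUV irradiance values (a regression task, not classification).
    Channel and output counts are illustrative placeholders only."""
    def __init__(self, in_channels=10, out_features=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, out_features)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One heavily downsampled 10-channel "solar image" -> predicted EUV values.
model = EUVRegressor()
fake_image = torch.randn(1, 10, 64, 64)
print(model(fake_image).shape)   # torch.Size([1, 39])

The essential point is that the network outputs a vector of continuous irradiance values rather than a class label, which is what distinguishes this regression setup from typical image classification.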

Using MacGyver-ish AI Tools

Impressively, the engineers took on the task using only common software and hardware tools: Jupyter notebooks, PyTorch, NVIDIA GPUs, IBM Watson AI, and the Nimbix cloud to host it all. Each of the tools was chosen for a specific reason. Jupyter notebooks are the easiest way for engineers to collaborate. NVIDIA makes the best GPUs available today. IBM’s AI tools are designed to solve enterprise AI problems. And the Nimbix cloud is the best AI cloud out there.

The team broke the dataset into four parts. The network crunched the first year’s worth of daily terabyte-scale data to produce a solid initial model. After multiple iterations and training on that full year of data, the engineers put it to the task on a second year’s worth of data. After learning from that, they used a third year of data to retrain the model, and finally put the final model to the test on the fourth year of data.
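
One simple, hypothetical way to express that year-by-year scheme in code is to split the paired samples by year and hold out the final year for testing; the tuple layout below is invented for illustration.

def split_by_year(samples):
    """samples: list of (year, aia_image, euv_vector) tuples covering 2010 to mid-2014.
    Train on the earlier years, hold out the final year as the test set."""
    years = sorted({year for year, _, _ in samples})
    train_years, test_year = years[:-1], years[-1]
    train = [s for s in samples if s[0] in train_years]
    test = [s for s in samples if s[0] == test_year]
    return train, test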

The results were 97.5% accurate. Now, AI can process the high-quality images generated by the AIA and supply EUV data for the years since the EVE stopped working (mid-2014 to date).

If AI can figure out the missing data from the sun based on current inputs, could we also predict the EUV spectra into the future with precision? Being able to predict solar changes ahead of time would have dramatic benefits on earth. Moreover, this technique of using AI to fill in “data gaps” based on surrounding information could be used in other applications, such as an IoT installation where a sensor malfunctions, or a portion of a customer satisfaction survey that is often, but not always, skipped by clients of a financial services company. As is often the case with AI, the sky is the limit!

Andy Thurai is an accomplished professional with 25+ years of experience in technical, business development, and architecture leadership positions. Learn more at his LinkedIn page.