Data Science, Machine Learning, Natural Language Processing, Text Analysis, Recommendation Engine, R, Python
Monday, 30 April 2018
Geek Trivia: In The Catholic Church, The Patron Saint Of Cooks And Chefs Is Also The Patron Saint Of?
Facebook Is Testing Reddit-Style Downvote Buttons
T-Mobile Will Buy Sprint For $26.5 Billion, If the FCC Approves

After years of toying with the idea, T-Mobile is finally buying Sprint, for $26.5 billion.
How to Disable Nearby Sharing on Windows 10
How to Disable the Timeline on Windows 10
7 Of The Best Mobile Time Wasters (For Your Monday Morning and Beyond)

Your smartphone is effectively a portable PC, providing you with endless access to information.
Use Windows 10’s New “Free Up Space” Tool to Clean Up Your Hard Drive
How to See What Data Windows 10 is Sending to Microsoft
Everything New in Windows 10’s April 2018 Update, Available April 30
Sunday, 29 April 2018
Don’t Want the April 2018 Update Yet? Here’s How to Pause It
Geek Trivia: Which Of These Countries Has The Safest Electrical Plug Design?
Why the Facebook Microphone Myth Persists
Saturday, 28 April 2018
Geek Trivia: Inspector Gadget Was Voiced By The Same Actor Who Portrayed Which Of These Famous Spies?
MoviePass Was a Great Deal, Now It’s Trying Really Hard Not To Be
How to Fix Plex Showing the Wrong Movie or TV Show
Friday, 27 April 2018
Geek Trivia: Which Of These Music Stars Founded Their Own Internet Service Provider?
Jenkins Essentials: The days of versions are numbered
A couple of weeks ago, I wrote about the Jenkins Essentials effort, on which we’ve been making steady progress. Personally, the most exciting challenge of this project is defining the machinery to drive automatic updates of Jenkins Essentials, which, viewed from a high level, is a classic continuous delivery challenge.
In this post, I wanted to dive into some of the gritty details of how we’re going to deliver Jenkins Essentials with automatic updates, which imposes some really interesting requirements on the development of Jenkins itself.

The traditional Jenkins core and plugin development workflow involves a developer working on changes for some amount of time; when they’re ready, they "create a release", which typically involves publishing artifacts to our Artifactory. Then, on a timer (typically hourly), the Update Center re-generates a file called update-center.json. Once the new Update Center file has been generated, it is published and consumed by Jenkins installations within 24 hours. Of course, Jenkins administrators can only install an update once they notice that one is available. All in all, it can take quite a long time from when a developer publishes a release to when it is successfully used by an end-user.
With our desire to make Jenkins Essentials updates seamless and automatic, the status quo clearly was not going to work. This shift in thinking has required a couple of simultaneous efforts to make a more continuously delivered approach viable.
Developer Improvements
Starting from the developer’s workflow, Jesse Glick has been working on publishing "incremental builds" of artifacts into a special Maven repository in Artifactory. Much of his work is described in the very thorough Jenkins Enhancement Proposal 305. This support, which is now live on ci.jenkins.io, allows plugin developers to publish versioned changes from pull requests and branches to the incrementals repository. Not only does this make it much easier for Jenkins Essentials to deliver changes closer to the HEAD of master branches, it also unlocks lots of flexibility for Jenkins developers who coordinate changes across matrices of plugins and core, as is occasionally necessary for Jenkins Pipeline, Credentials, Blue Ocean, and a number of other foundational components of a modern Jenkins install.
In a follow-up blog post, Jesse is going to go into much more detail on some of the access control and tooling changes he had to solve to make this incrementals machinery work.
Of course, incremental builds are only a piece of the puzzle: Jenkins Essentials has to be able to do something useful with those artifacts!
Update Improvements
The number one requirement, from my perspective, for the automatically updated distribution is that it is safe. Safety means that a user doesn’t need to be involved in the update process, and that if something goes wrong, the instance can recover without the user needing to do anything to remediate a "bad code deploy."
In my previous post on the subject, I mentioned Baptiste’s work on Jenkins Enhancement Proposal 302 which describes the "data safety" system for safely applying updates, and in case of failure, rolling back.
The next obvious question is "what counts as failure?", which Baptiste spent some time exploring and implementing in two more designs.
On the server side, where there is substantial work for Jenkins Essentials, these concepts integrate with an Update Lifecycle between the server and client. In essence, the server side must be able to deliver the right updates to the right client, and avoid delivering tainted updates (those with known problems) to clients. While this part of the work is still ongoing, tremendous progress has been made over the past couple of weeks in ensuring that updates can be delivered safely, securely, and automatically.
With the ability to identify "bad code deploys" and a mechanism for safely rolling back, Jenkins Essentials not only allows seamless updates, it also enables Jenkins developers to deliver features and bugfixes much more quickly than our current distribution model allows.
While Jenkins Essentials does not have a package ready for broad consumption yet, we’re rapidly closing in on the completion of our first milestone which ties all of these automatic update components together and builds the foundation for continuous delivery of all subsequent improvements.
You can follow our progress in the jenkins-infra/evergreen repository, or join us in our Gitter chat!
Windows 10’s April 2018 Update Barely Squeezes Into April On Monday
Here’s How to Back Up Your Mac Now That Apple’s Done Making Routers
Amazon Is Raising the Price of Prime Again to $119 Per Year

In 2014, Amazon raised the yearly price of its Prime subscription from $79 per year to $99.
Alternative Ubuntu Versions Are Also Out This Week
How to Avoid Computer Eye Strain and Keep Your Eyes Healthy
How to Use a USB Flash Drive with Android
Sensor Push Review: The Best Smart Hygrometer and Thermometer Around

Whether you want to keep an eye on a musty basement, a cigar collection, a premium guitar, or your baby’s nursery, the line of Sensor Pus…
Spotify Free vs. Premium: Is it Worth Upgrading?
How Good is VR in 2018? Is It Worth Buying?
iTunes Is Now in the Microsoft Store
How to Use Alexa Blueprints to Create Your Own Alexa Skills
Thursday, 26 April 2018
Geek Trivia: Up To One Fifth Of The Legionnaires’ Disease Outbreaks In England Were Linked To?
Women in Big Data Lunch Panel @ DataWorks Summit Berlin
This post was originally published on the DataWorks Summit Blog. I had the honor and pleasure to moderate the panel for the “Women in Big Data Lunch” on April 18, during Dataworks Summit in Berlin. Our distinctive panelists included: Mandy Chessell – Distinguished Engineer, IBM Ellen Koenig – Data Scientist, formerly of Europace AG Tina […]
Face It, Apple Products Are Just Better Value

Good value isn’t just about price—it’s about what you get for the price—and while Apple products are certainly expensive, they’re generally bett…
Hey Google, Why Do You Have Four Different Task Apps?
Business Folk Need and Want Big Data — The Issue Is Teaching Them
Alexa Can Now Tell Your Kids to Say Please
Despite Banning Paid Reviews, Amazon Still Has a Ton of Them

Amazon doesn’t want reviewers getting paid to say nice things about products.
Messaging Can Be Open or Secure (But Probably Not Both)
How To Restrict Data Input In Excel With Data Validation
How to Install LineageOS on Android
The Best Must-Have Exclusive Games For the PS4

Every console has a handful of unique games that you can’t get anywhere else.
Can you Replace the Battery in Your MacBook?
What’s New in Ubuntu 18.04 LTS “Bionic Beaver”, Available Now
How to Uninstall Emoji on Ubuntu
The Six Pillars of an Effective Business Process
New YouTube Kids Setting Allows Only Videos Reviewed By Actual Humans
How to Disable Facebook Messenger’s Chat Heads on Android
Wednesday, 25 April 2018
Geek Trivia: The Japanese Style Of Horizontally-Read Emoticons Are Called?
Nintendo Labo Review: A Fun Engineering Workshop Wrapped In Cardboard

Nintendo wants to sell you cardboard and, against all odds, we’re on board with this proposition.
Google Launches Dedicated Tasks App for Android and iOS
The Echo Dot Kids Edition Is a More Expensive Echo For Children

Kids love toys they can interact with, and apparently the Echo Dot fits that bill.
The New Gmail Interface Launches Today
How To Stop All Of Amazon’s Many (MANY) Emails
How to Use a USB Flash Drive with Your iPhone
The Best USB Charging Stations For Families

We’ve got more gadgets than ever and now even our kids are getting in on the gadget game.
What To Do If Your Mac Gets Stolen
Gigabit Ethernet vs. Fast Ethernet: What’s the Difference?
List the areas getting benefit from Big Data analytics
Configuring a Jenkins Pipeline using a YAML file
This guest post was originally published on Wolox’s Medium account.
A few years ago our CTO wrote about building a Continuous Integration server for Ruby on Rails using Jenkins and Docker. That solution has been our CI pipeline for the past few years, until we recently decided to make an upgrade. Why?
-
Our Jenkins version was way out of date, and it was getting difficult to upgrade
-
Wolox has grown significantly over the past years and we’ve been experiencing scaling issues
-
Very few people knew how to fix any issues with the server
-
Configuring jobs was not an easy task and that made our project kickoff process slower
-
Making changes to the commands that each job runs was not easy, and not many people had permissions to do so. Wolox has a wide range of projects in a wide variety of languages, which made this problem even bigger.
Taking into account these problems, we started digging into the newest version of Jenkins to see how we could improve our CI. We needed to build a new CI that could, at least, address the following:
-
Projects must be built using Docker. Our projects depend on one or multiple docker images to run (app, database, redis, etc)
-
Easy to configure and replicate if necessary
-
Easy to add a new project
-
Easy to change the build steps. Everyone working on the project should be able to choose whether it runs npm install or yarn install.
Installing Jenkins and Docker
Installing Jenkins is straightforward. You can visit Jenkins Installation page and choose the option that best suits your needs.
Here are the steps we followed to install Jenkins in AWS:
sudo rpm --import https://pkg.jenkins.io/debian/jenkins.io.key
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins.io/redhat/jenkins.repo
sudo yum install java-1.8.0 -y
sudo yum remove java-1.7.0-openjdk -y
sudo yum install jenkins -y
sudo yum update -y
sudo yum install -y docker
Automatically adding projects from GitHub
Adding projects automatically from GitHub can be achieved using the GitHub Branch Source Plugin. It allows Jenkins to scan a GitHub organization for projects that match certain rules and add them to Jenkins automatically. The only constraint a branch must meet in order to be added is that it contains a Jenkinsfile explaining how to build the project.
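For readers unfamiliar with the format, here is a minimal declarative Jenkinsfile. This is a generic sketch rather than one of our actual pipelines; the stage names and shell commands are purely illustrative:
// Minimal declarative Jenkinsfile sketch; stage names and shell
// commands are illustrative, not taken from a real Wolox project.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker-compose build'
            }
        }
        stage('Test') {
            steps {
                sh 'docker-compose run web bundle exec rspec'
            }
        }
    }
}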
Not so easy to change configuration
One of the biggest pains we had with our previous Jenkins was the difficulty of changing the steps necessary to build the project. If you looked at a project’s build steps, you would find something like this:
#!/bin/bash +x
set -e
# Remove unnecessary files
echo -e "\033[34mRemoving unnecessary files...\033[0m"
rm -f log/*.log &> /dev/null || true &> /dev/null
rm -rf public/uploads/* &> /dev/null || true &> /dev/null
# Build Project
echo -e "\033[34mBuilding Project...\033[0m"
docker-compose --project-name=${JOB_NAME} build
# Prepare test database
COMMAND="bundle exec rake db:drop db:create db:migrate"
echo -e "\033[34mRunning: $COMMAND\033[0m"
docker-compose --project-name=${JOB_NAME} run \
-e RAILS_ENV=test web $COMMAND
# Run tests
COMMAND="bundle exec rspec spec"
echo -e "\033[34mRunning: $COMMAND\033[0m"
unbuffer docker-compose --project-name=${JOB_NAME} run web $COMMAND
# Run rubocop lint
COMMAND="bundle exec rubocop app spec -R --format simple"
echo -e "\033[34mRunning: $COMMAND\033[0m"
unbuffer docker-compose --project-name=${JOB_NAME} run -e RUBYOPT="-Ku" web $COMMAND
And some post-build steps that cleaned up Docker:
#!/bin/bash +x
docker-compose --project-name=${JOB_NAME} stop &> /dev/null || true &> /dev/null
docker-compose --project-name=${JOB_NAME} rm --force &> /dev/null || true &> /dev/null
docker stop `docker ps -a -q -f status=exited` &> /dev/null || true &> /dev/null
docker rm -v `docker ps -a -q -f status=exited` &> /dev/null || true &> /dev/null
docker rmi `docker images --filter 'dangling=true' -q --no-trunc` &> /dev/null || true &> /dev/null
Although these commands are not complex, changing any of them required someone with permissions to modify the job and an understanding of what needed to be done.
Jenkinsfile to the rescue… or not
With the current Jenkins version, we can take advantage of Jenkins Pipeline and model our build flow in a file. This file is checked into the repository and, therefore, anyone with access to it can change the build steps. Yay!
Jenkins Pipeline even has support for:
-
Docker and multiple images can be used for a build!
-
Setting environment variables with withEnv and many other built-in functions that can be found here.
This makes a perfect case for Wolox. We can have our build configuration in a file that’s checked into the repository and can be changed by anyone with write access to it. However, a Jenkinsfile for a simple Rails project would look something like this:
// sample Jenkinsfile. Might not compile
node {
  checkout scm
  withEnv(['MYTOOL_HOME=/usr/local/mytool']) {
    docker.image("postgres:9.2").withRun() { db ->
      withEnv(['DB_USERNAME=postgres', 'DB_PASSWORD=', "DB_HOST=db", "DB_PORT=5432"]) {
        docker.image("redis:X").withRun() { redis ->
          withEnv(["REDIS_URL=redis://redis"]) {
            docker.build(imageName, "--file .woloxci/Dockerfile .").inside("--link ${db.id}:postgres --link ${redis.id}:redis") {
              sh "rake db:create"
              sh "rake db:migrate"
              sh "bundle exec rspec spec"
            }
          }
        }
      }
    }
  }
}
This file is not only difficult to read, but also difficult to change. It’s quite easy to break things if you’re not familiar with Groovy and even easier if you know nothing about how Jenkins’ pipeline works. Changing or adding a new Docker image isn’t straightforward and might lead to confusion.
Configuring Jenkins Pipeline via YAML
Personally, I’ve always envied the simple configuration files of other CIs, and this time it was our chance to build a CI that could be configured using a YAML file. After some analysis, we concluded that a YAML file like this one would suffice:
config:
  dockerfile: .woloxci/Dockerfile
  project_name: some-project-name
services:
  - postgresql
  - redis
steps:
  analysis:
    - bundle exec rubocop -R app spec --format simple
    - bundle exec rubycritic --path ./analysis --minimum-score 80 --no-browser
  setup_db:
    - bundle exec rails db:create
    - bundle exec rails db:schema:load
  test:
    - bundle exec rspec
  security:
    - bundle exec brakeman --exit-on-error
  audit:
    - bundle audit check --update
environment:
  RAILS_ENV: test
  GIT_COMMITTER_NAME: a
  GIT_COMMITTER_EMAIL: b
  LANG: C.UTF-8
It outlines some basic configuration for the project, environment variables that need to be present during the run, dependent services, and our build steps.
Jenkinsfile + Shared Libraries = WoloxCI
After investigating Jenkins and its pipeline for a while, we found that we could extend it with shared libraries. Shared libraries are written in Groovy and can be imported into the pipeline and executed when necessary.
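As a rough illustration of the mechanism (a simplified sketch, not the actual wolox-ci source), a shared library exposes global steps through Groovy files in a vars/ directory:
// vars/sayBuildInfo.groovy -- a hypothetical global step in a shared
// library. Every file under vars/ becomes a step callable from any
// Jenkinsfile that imports the library with @Library.
def call(String projectName) {
    echo "Building ${projectName} on ${env.NODE_NAME}"
}
// In a Jenkinsfile, after `@Library('my-shared-lib') _`, the step is
// invoked simply as: sayBuildInfo('some-project')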
If you look carefully at the Jenkinsfile above, you can see that the code is a chain of method calls that receive a closure, inside which we execute another method, passing a new closure to it.
Groovy is flexible enough to allow this same declarative code to be created at runtime, making our dream of configuring our jobs with a YAML file come true!
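To illustrate the idea, here is a simplified sketch of reading the YAML and generating stages at runtime. It assumes the readYaml step from the Pipeline Utility Steps plugin and is not the actual wolox-ci code:
// Hypothetical sketch: turn the YAML config into pipeline stages at
// runtime. Assumes the Pipeline Utility Steps plugin for readYaml.
def config = readYaml file: '.woloxci/config.yml'
config.steps.each { stageName, commands ->
    stage(stageName) {
        // Run each configured shell command for this stage
        commands.each { command ->
            sh command
        }
    }
}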
Introducing Wolox-CI
That’s how wolox-ci, our shared library for Jenkins, was born!
With wolox-ci, our Jenkinsfile is now reduced to:
@Library('wolox-ci') _

node {
  checkout scm
  woloxCi('.woloxci/config.yml')
}
Now it simply checks out the code and then calls wolox-ci. The library reads a YAML file like the one shown above
and builds the Jenkinsfile to get your job running on the fly.
The nice part about having a shared library is that we can extend and fix our library in a centralized way. Once we push new code, the library is automatically updated in Jenkins and all of our jobs pick up the change.
Since we have projects in different languages we use Docker to build the testing environment. WoloxCI assumes there is a Dockerfile to build and will run all the specified commands inside the container.
WoloxCI config.yml
Config
The first part of the config.yml file specifies some basic configuration: the project’s name and the Dockerfile location. The Dockerfile is used to build the image in which the commands will be run.
Services
This section describes which services will be exposed to the container. Out of the box, WoloxCI has support for postgresql, mssql and redis, and you can also specify the Docker image version you want. It is not hard to add a new service: you just need to add the corresponding service file and modify how the services are parsed.
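Purely as a hypothetical illustration (the real layout lives in the wolox-ci repository), a service definition could be as small as an entry mapping the service name to a Docker image, which the library then starts with withRun:
// Hypothetical sketch only -- see the wolox-ci repository for the
// real service files and parsing code.
def serviceImages = [
    postgresql: 'postgres:9.6',
    mssql     : 'microsoft/mssql-server-linux',
    redis     : 'redis:alpine'
]
// Starting a declared service before running the build steps:
docker.image(serviceImages['redis']).withRun { container ->
    echo "redis is running in container ${container.id}"
}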
Wrapping up
WoloxCI is still being tested on a not-so-small sample of our projects. The possibility of changing the build steps through a YAML file makes the CI accessible to everyone, and that is a great improvement in our workflow.
Docker gives us the possibility of easily changing the programming language without making any changes to our Jenkins installation, and Jenkins’ GitHub Organization feature automatically adds new projects when a new repository with a Jenkinsfile is detected.
All of these improvements have significantly reduced the time we spend maintaining Jenkins, and they let us scale easily without any extra configuration.
This library is working in our CI but it still can be improved. If you would like to add features, feel free to contribute!
How To Download All Your Instagram Photos
6 Things All New Home Server Users Should Have
Tuesday, 24 April 2018
Data Marketplaces Powered by Blockchain
As much as data marketplaces seem viable, they still have not gained traction. Some marketplaces are available for industry/public data sets, like D&B Data Exchange, AWS Public Datasets, etc. However, the vision of ecosystems starting to share enterprise data sets is yet to materialize. More than any of the other technical reasons […]
The Best Luxury iPhone Cases (To Match Your Designer Purse)

Want an iPhone case that will keep your phone safe without looking like a piece of safety equipment?
Geek Trivia: Jumboisation Is A Technique For Enlarging?
Spotify’s Free Tier Now Offers (Some) On-Demand Music on Mobile
Uber Might Stop Drivers From Looking Up Your Address History
How To Force Microsoft Excel To Show Leading Zeroes
Instant Pot Review: If You Buy One Kitchen Appliance, Buy This One

Everyone has that one friend who bought an Instant Pot and won’t shut up about how awesome it is and how you should get one—and th…
How to Move Ubuntu’s Launcher Bar to the Bottom or Right
What Can You Do with Samsung Health?
Six Tips to Help Save Yourself from Poor Computer Posture
Facebook Insists You’re Not The Product
Should You Build Your Own DIY Security System?
Monday, 23 April 2018
Geek Trivia: The Original “Snake Oil” Was Effective At Treating?
The Best Smart Toothbrushes for Everyone

It’s easy to brush your teeth ineffectively by brushing too hard, too little, or not giving each area of your mouth the attention it dese…
Facebook’s ‘Download Your Data’ Feature Leaves Out a Lot
Is Bossing Around Alexa Teaching Your Kid To Be Rude?
Your T-Mobile Service Probably Got Better Last Week

If you’re on T-Mobile—or a smaller network that uses it, like MetroPCS or Google’s Project Fi—your cell service might’ve gotte…
How To Automatically Add Citations And Bibliographies To Microsoft Word
YouTube TV Review: Finally, Live Television Is Tolerable In the 21st Century

While Netflix created a new way to watch TV online, regular live TV languished for years in an old distribution model.
The Best Podcast Clients for Android
How to Stop Ubuntu From Collecting Data About Your PC
The Best Free Screenshot Apps for Windows
Sunday, 22 April 2018
Geek Trivia: Which Game Company Produced, But Never Released, A VR Headset In The 1990s?
Organize Your Entire Video Game Collection in One Place with LaunchBox
Saturday, 21 April 2018
Geek Trivia: Coca Cola Almost Created A Vending Machine That Would Adjust Its Prices Based On?
Android Easter Eggs from Gingerbread to Oreo: A History Lesson
Electric scooters are causing havoc. This man is shrugging it off
Viacom’s Journey to Improving Viewer Experiences with Real-time Analytics at Scale
With over 4 billion subscribers, Viacom is focused on delivering amazing viewing experiences to their global audiences. Core to this strategy is ensuring petabytes of streaming content is delivered flawlessly through web, mobile and streaming applications. This is critically important during popular live events like the MTV Video Music Awards.
Streaming this much video can strain delivery systems, resulting in long load times, mid-stream freezes and other issues. Not only does this create a poor experience, but it can also result in lost ad dollars. To combat this, Viacom set out to build a scalable analytics platform capable of processing terabytes of streaming data for real-time insights on the viewer experience.
After evaluating a number of technologies, Viacom found their solution in Amazon S3 and the Databricks Unified Analytics Platform, powered by Apache Spark™. The rapid scalability of S3, coupled with the ease and processing power of Databricks, enabled Viacom to rapidly deploy and scale Spark clusters and unify their entire analytics stack – from basic SQL to advanced analytics on large-scale streaming and historical datasets – with a single platform.
To learn more, join our webinar How Viacom Revolutionized Audience Experiences with Real-Time Analytics and AI on Apr 25 at 10:00 am PT.
The webinar will cover:
- Why Viacom chose Databricks, Spark and AWS for scalable real-time insights and AI
- How a unified platform for ad-hoc, batch, and real-time data analytics enabled them to improve content delivery
- What it takes to create a self service analytics platform for business users, analysts, and data scientists
Register to attend this session.
5 Reasons to Attend Spark + AI Summit
Spark + AI Summit will be held in San Francisco on June 4-6, 2018. Check out the full agenda and get your ticket before it sells out! Register today with the discount code 5Reasons and get 15% off.
Convergence of Knowledge
For any Apache Spark enthusiast, these summits are the convergence of Spark knowledge. Spark is used by a growing global community of enterprises, academics, contributors, and advocates, and attendees have convened at these summits since 2013 to share knowledge. This summer, attendees will return to San Francisco—to an expanded scope and agenda.

Expansion of Scope
Today, unified analytics is paramount for building big data and Artificial Intelligence (AI) applications. AI applications require massive amounts of data to enhance and train machine learning models at scale, and so far Spark has been the only engine that combines large-scale data processing with the execution of state-of-the-art machine learning and AI algorithms in a unified manner.
So we have changed the name and expanded the scope of the summit to bring you AI use cases and machine learning technology.
“AI has always been one of the most exciting applications of big data and Apache Spark, so with this change, we are planning to bring in keynotes, talks and tutorials about the latest tools in AI in addition to the great data engineering and data science content we already have” — Matei Zaharia.
For this expanded scope and much more, here are my five reasons as a program chair why you should join us.
1. Keynotes from Distinguished Engineers, Academics and Industry Leaders
Distinguished engineers and academics (Matei Zaharia, Dominique Brezinski, Dawn Song, and Michael I. Jordan) and visionary industry leaders (Ali Ghodsi, Marc Andreessen, and Andrej Karpathy) in the big data and AI industries will share their vision of where Apache Spark and AI are heading in 2018 and beyond.
New keynote speakers announced! Join @pmarca from @a16z at #SparkAISummit in June! https://t.co/L8YVD2VFrc pic.twitter.com/TM6xIXtEQM
— Spark + AI Summit (@SparkAISummit) April 5, 2018
2. Five New Tracks
To support our expanded scope, we have added five tracks covering AI and Use Cases, Deep Learning Techniques, Python and Advanced Analytics, Productionizing Machine Learning Models, and Hardware in the Cloud. Combined with all the other tracks, these sessions will give you over 180 talks to choose from. And if you miss any sessions, you can peruse the recorded sessions on the summit website later.
3. Apache Spark Training
Update your skills and get the best training from Databricks’ best trainers, who have trained over 3,600 summit attendees. On a day dedicated to training, you can choose from four courses and stay abreast of the latest in Spark 2.3 and Deep Learning: Data Science with Apache Spark; Understand and Apply Deep Learning with Keras, TensorFlow, and Apache Spark; Apache Spark Tuning and Best Practices; and Apache Spark Essentials. Depending on your preference, you can register for each class on either the AWS or Azure cloud. Plus, we will offer a half-day Databricks Developer Certification for Apache Spark prep course, after which you can sit for the exam on the same day. Get Databricks Certified!
Have you checked out the trainings we are offering this year at #SparkAISummit? Register today to secure your spot: https://t.co/Mp3Yjm0Qsp pic.twitter.com/TzBHY7pUfA
— Spark + AI Summit (@SparkAISummit) March 14, 2018
4. The Bay Area Apache Spark Meetup
Apache Spark Meetups are reputed for their tech talks. At the summit meetups, you can learn what other Spark developers from all over are up to, mingle and enjoy the beverages and camaraderie in an informal setting, and ask burning questions.
5. City By The Bay
San Francisco is a city famed for its restaurants, cable cars, hills, Golden Gate Bridge, and vibrant nightlife. Take a breather after days of cerebral sessions, chill out at the Fisherman’s Wharf, visit MOMA, and much more…
We hope to see you in San Francisco!
What’s Next
With less than six weeks left, tickets are selling fast. If you haven’t yet, register today with the discount code 5Reasons and get 15% off.
The Best Pet Cams for Cats, Dogs, and Their Doting Owners

Pet cams are a convenient way to make sure your fur babies are safe (and staying out of mischi…
Which Synology NAS Should I Buy?
Friday, 20 April 2018
Geek Trivia: The First Television Remotes Communicated With The TV Set By?
Google and Carriers Plan to Replace SMS With a New Protocol Called ‘Chat’
Why Do Some Smartphones Use Multiple Cameras?
How to Disable the “Low Disk Space” Warning on Windows
How to Protect Your Privacy on Facebook
An Android User’s Take on the iPhone
AI researchers are making more than $1 million, even at a nonprofit
Facebook Is Using Dark Patterns To Undermine EU Privacy Rules
‘My Time at Reddit Made the World a Worse Place’ Says Former Head of Product
What’s the Difference Between Bitcoin, Bitcoin Cash, Bitcoin Gold, and Others?
Thursday, 19 April 2018
Six Of The Best In-Ear Noise Canceling Earbuds

Noise-cancelling earbuds deliver the benefits of noise-cancellation—so you can enjoy your commute or flight in peace—but in a tiny…
Geek Trivia: When Khrushchev Visited IBM’s California Facility In 1959, He Brought The Concept For What Home With Him?
Microsoft Kills OneNote Desktop in Favor of the Microsoft Store Version
Skill Blueprints Lets You Design Your Own Alexa Responses
The Best Shooter Games (For People Who Suck At Shooters)
The Best Smartphone VR Headsets

A full-scale VR rig is expensive, but you don’t need to spend a fortune to try out VR.
How to Send Web Pages from Chrome to Your Phone
How to Create the Perfect Facebook Cover Photo
The Best Ways to Send Money with Your Phone
Women in Big Data and Apache Spark: Bay Area Apache Spark Meetup Summary
In collaboration with the local chapter of the Women in Big Data Meetup, and as part of the Databricks diversity team’s continuing effort to bring more women in the big data space on stage to share their subject matter expertise, we hosted our second Bay Area Spark Meetup at Databricks, featuring a diverse group of highly accomplished women in their respective technical fields as speakers.
For those who missed the meetup, moderated by Maddie Schults and Yvette Ramirez, below are the videos and links to the presentation slides. You can peruse slides and view the videos at your leisure. To those who helped and attended, thank you for your participation and continued community support. And to those who wish to be part of the diversity, join your local Women in Big Data chapter—and be a champion.
Bringing a Jewel from Python World to the JVM with Apache Spark, Arrow, and Spacy
By Holden Karau
Just Enough DevOps for Data Scientists (Part II)
By Anya Bida
Creating Beautiful and Meaningful Visualizations with Big Data
By Shane He
What’s Next
Our next BASM will be held in June 2018 at the Moscone Center in SF as part of the Spark + AI Pre-summit Meetup. If you are not a member, join us today. If you have not registered for the Spark + AI Summit in SF, please do so now and use the BASMU code to get a 15% discount. We hope to see you there.
Minecraft’s Official Website Distributed Malware-Infested Skins
Alibaba confirms developing self-driving vehicles
Is 24/7 Professional Home Security Monitoring Worth It?
Wednesday, 18 April 2018
Geek Trivia: The Invention Of Which Of These Things Was Critical To The Rapid Adoption Of The Telephone?
The 5 Best Ultra Portable Photo Printers For Speedy Snaps

Thanks to our ever present smartphones it’s easier than ever to snap tons of photos.
You Can Still File Your Taxes Today
Amazon Will Sell TVs Inside Best Buy, As Long As You Don’t Mind Fire TV

Best Buy is already the best place to window show for the TV you’re eventually going to go buy on Amazon anyway.
Chrome Now (Sort Of) Mutes Auto-Playing Videos
Faster sailing on Blue Ocean 1.5.0
Hello, I am Jenn, the new Product Manager for Blue Ocean and Jenkins Pipeline at CloudBees. I am based out of the Seattle area and am excited to be working on Jenkins. :D We released version 1.5.0 of the Blue Ocean plugin late last week. If you’re using Blue Ocean, you’ll want to grab this update since it includes many improvements and bug fixes!
New Features
Blue Ocean now includes a user interface update to show the downstream jobs launched with the build step (JENKINS-38339)

With Blue Ocean 1.5.0, users can now reorder steps in the Blue Ocean Pipeline Editor simply by dragging and dropping them in the list of steps. (JENKINS-38323)
The "Artifacts" tab now supports pagination. Previously, this list was cut off at 100 entries. (JENKINS-43588)
Improvements
We were also able to include two performance improvements in 1.5.0 that reduce the Dashboard loading time in Blue Ocean.
Support for viewing output for failed runs with no stages is also included in this release. Developers who have no stages or steps defined in their pipelines can now see the full log output for failed runs, which helps with Pipeline debugging in Jenkins. (JENKINS-48074)
Further improving the log output for Pipeline runs, 1.5.0 also wraps long log lines. Previously, a single long line of output wouldn’t be fully visible in the log window. (JENKINS-49036)
Fixes
One notable bug fix we addressed in this release was that input steps in post directives would not properly prompt for input. With JENKINS-49297 fixed, users of Declarative Pipeline with Blue Ocean can include input steps in their post directives.
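For reference, a minimal Declarative Pipeline exercising this fix might look like the following sketch (illustrative, not taken from the release notes):
// Sketch of an input step inside a post directive -- the case fixed
// by JENKINS-49297. Stage names and messages are illustrative.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
    post {
        failure {
            // Blue Ocean now prompts correctly for this input
            input message: 'Deploy failed. Investigate before cleanup?'
        }
    }
}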
The full detailed change log can be viewed on the Blue Ocean plugin page.
Update Your Plugin
Plugin updates in Jenkins are available via the Plugin Manager. This page includes instructions for using the UI and the CLI to update your plugins: https://jenkins.io/doc/book/managing/plugins/.
If you are using the Blue Ocean UI, click Administration in the page’s header to open Plugin Manager.
Installing the primary Blue Ocean plugin will update its dependencies as well.
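For example, updating from the command line with the standard Jenkins CLI might look like this (a sketch; adjust the URL and authentication for your installation):
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin blueocean
java -jar jenkins-cli.jar -s http://localhost:8080/ safe-restart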
Providing Feedback
-
Chat with us at Gitter: https://gitter.im/jenkinsci/blueocean-plugin
-
Report issues at https://issues.jenkins-ci.org/
How To “Silence” Your Noisy Mechanical Keyboard
How to Find Your Lost AirPods
When Will Your Phone Get Android Oreo?
Shutting Down Doesn’t Fully Shut Down Windows 10 (But Restarting Does)
Tube Is YouTube Without Any Attention Stealing Clutter
Not Getting macOS Notifications? Here’s How to Fix It (Without Rebooting)
Tuesday, 17 April 2018
Geek Trivia: Which Of These Things Is Paradoxically Difficult To Measure Accurately?
Best USB-C Docks for Your MacBook Pro

Newer MacBooks are sorely lacking in ports, but you can easily upgrade your experience with a full featured and flexible USB-C dock that not o…
Stop Using Six Digit Numeric iPhone Passcodes Right Now
Constant Blue Screens Delayed The Windows 10 April 2018 Update
GrubHub Will Let You Split the Check With Venmo Integration

GrubHub can save a party with a quick meal for everyone, but when it comes time to split the bill, it can be a nightmare.
How To Set the Default Font in Photoshop and Illustrator
The Best Keyless Lock For Every Need

Somewhere between a regular door lock and a full-blown smart lock, there sits a middle ground.
What Is a CSV File, and How Do I Open It?
How to Install Google Chrome on Ubuntu 18.04
Why It’s Nearly Impossible to Make Money Mining Bitcoin
How to Choose the Best VPN Service for Your Needs
Why Isn’t PBS Offered By Sling and Other Streaming Services?
Plover.io Quickly Shares Files Between Local Devices
How Much Electricity Do All Your Appliances Use?
Monday, 16 April 2018
Geek Trivia: Which Of These Parts Of A Chicken Help It Regulate Its Body Temperature?
The Best Slim, Low Profile iPhone X Cases (That Still Protect Your Phone)

Sturdy and bulky cases are great if you live a rugged lifestyle, but how about if you just need some slim and simple protection for your iPhon…
Voice Assistants Are Only Intuitive If You’re Already Tech Savvy
Sega Is Bringing 15 Classic Games, Including Sonic, to the Switch This Summer

There was a time when we thought that the Switch would be a perfect fit for the Virtual Console.
Headlines About ‘The Dark Web’ Are Usually Nonsense
How To Improve Your Aim in PC Games
Jenkins X Explained Part 1 - an integrated CI/CD solution for Kubernetes
Jenkins X is an opinionated platform for providing CI/CD on top of Kubernetes. We’ve chosen a set of core applications that we install and wire together so things work out of the box, providing a turnkey experience. This blog aims to build on previous introductions to Jenkins X and provide a deeper insight into what you get when you install Jenkins X.

So what happens? After downloading the jx CLI, you will be able to create clusters with public cloud providers or install Jenkins X onto an existing Kubernetes cluster.

This command will create a cluster on your cloud provider of choice.
> jx create cluster
Alternatively you can bring your own Kubernetes cluster and install Jenkins X on it:
> jx install
That said, we’ve found that creating a new cluster on a public cloud such as GKE is a lot easier to start with, as we can be sure of the state of the cluster. For example, we know that storage, networking and load balancers will work as expected. Creating a cluster on GKE takes only a few minutes, so it’s a great way to try things out as well as run your enterprise workloads.
For now let’s assume we are using GKE. When jx create cluster has finished you will see some output in the terminal, including the default admin password to use when logging into the core applications below. There is a flag, --default-admin-password, that you can use to set this password yourself.
Accessing applications
We automatically install an Nginx ingress controller running with an external load balancer pointing at its Kubernetes service. We also generate all the Kubernetes Ingress rules using a golang library called "exposecontroller". This runs as a Kubernetes Job, triggered by a Helm hook, once any application is installed into the cluster.
Using "exposecontroller" means we can control all the ingress rules for an environment using a single set of configurations, rather than each application needing to know how to expose its Kubernetes service to the outside world. This also means we can easily switch between HTTP and HTTPS, plus support integration with projects like cert-manager for auto-generation of signed TLS certificates.
Environments
One important point to make is that Jenkins X aims to use terminology that developers are familiar with. That’s not to say we are changing Kubernetes fundamentals; it’s more that if you don’t know Kubernetes concepts, we aim to help you adopt the cloud technology anyway and pull back the curtain as you gain confidence and experience. To that point, a core part of Jenkins X is "environments". An environment can have one or more applications running in it; in Kubernetes terms, an "environment" maps to a "namespace" in code.
By default the installation creates three environments: a "dev", a "staging" and a "production" environment. This is customisable. To list, select, or switch between these environments run:
> jx env
Jenkins X core applications
In the "dev" environment we have installed a number of core applications we believe are required at a minimum to start folks off with CI/CD on Kubernetes. We can easily add to these core apps using Jenkins X addons but for now lets focus on the core apps. Jenkins X comes with configuration that wires these services together, meaning everything works together straight away. This dramatically reduces the time to get started with Kubernetes as all the passwords, environment variables and config files are all setup up to work with each other.
-
Jenkins — provides both CI and CD automation. There is an effort to decompose Jenkins over time to become more cloud native and make use of Kubernetes concepts around CRDs, storage and scaling for example.
-
Nexus — acts as a dependency cache for Node.js and Java applications to dramatically improve build times. After an initial build of a Spring Boot application, the build time is reduced from 12 minutes to 4. We have not yet demonstrated swapping this for Artifactory, but we intend to soon.
-
Docker Registry — an in-cluster docker registry where our pipelines push application images. We will soon switch to using native cloud provider registries such as Google Container Registry, Azure Container Registry or Amazon Elastic Container Registry (ECR).
-
Chartmuseum — a registry for publishing Helm charts
-
Monocular — a UI used for discovering and running Helm charts
Helm
We learned a lot in our early days with fabric8 on Kubernetes, when some projects from the ecosystem either weren’t around yet or (at the time) didn’t work with OpenShift, so we were restricted when making some design decisions. A couple of years on, with Jenkins X, we were able to look at other OSS projects that have been flourishing, so I was very happy to start looking at Helm. Helm is a package manager for Kubernetes that allows easy installation and upgrades of applications.
It was pretty clear that for Jenkins to evolve and include deployments to the cloud, we should embrace Helm and provide an opinionated experience that helps teams and developers. The core applications mentioned above mean that Jenkins X provides an out-of-the-box integrated CI/CD solution for Helm.
We know that Helm has limitations, but with the work on Helm 3, the focus of the Kubernetes sig-apps group, the Kubernetes community, and the investment we see from key organisations such as Microsoft, we feel Helm is currently the best way to install and upgrade applications on Kubernetes.
GitOps
We mentioned earlier that we set up three environments by default. What this means is that for the staging and production environments we create:
-
A Kubernetes namespace
-
An environment resource (a CustomResourceDefinition) in the dev environment, which includes details of how applications are promoted to it and various team settings.
-
A git repository in which we store which applications, and which versions of them, should be present in that environment; these are recorded in a Helm requirements.yaml file (see the sketch after this list)
-
A Jenkins Pipeline job, explained in more detail below
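As an illustration, such a requirements.yaml pins each promoted application to a version; the application names, versions and chart repository URL below are made up:
# Illustrative sketch of an environment's Helm requirements.yaml;
# names, versions and the chart repository are made up.
dependencies:
- name: my-app
  repository: http://jenkins-x-chartmuseum:8080
  version: 0.0.14
- name: another-app
  repository: http://jenkins-x-chartmuseum:8080
  version: 1.2.3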
CI/CD for Environments
Having a Jenkins Pipeline job for each environment means that Pull Requests to the git repo trigger a CI job. For now that job performs basic validation, but in the future it will include 'gates' to ensure a change to that environment has passed the expected checks, such as QA tasks or gaining enough approvals from the correct people - yes, CI for environments!
Once CI checks have passed, the new application or version change can be merged. Only users who have karma can merge the Pull Request, and therefore we get RBAC plus traceability for our environment deployments.
This means every application manifest, version and piece of configuration - including storage requirements, resource needs and secrets for your environments - is stored in Git repositories. In a disaster recovery scenario, this is exactly what you want.
Did I just say secrets in Git? Yes! We will be providing a nicer experience to help folks get set up, but we ourselves encrypt our secrets, store them in Git, and decrypt them when we come to install and upgrade.
Here’s our Git repo https://github.com/jenkins-x/cloud-environments/blob/a1edcc6/env-jx-infra/secrets.yaml.
We do all this with the help of a Helm wrapper called helm secrets. I’m working on a follow-up blog post with examples, better explanations, how-to guides, and better jx integration in the coming weeks.
Fancy getting involved?
We mainly hang out in the jenkins-x Kubernetes slack channels; for tips on getting more involved with Jenkins X, take a look at our contributing docs.
If you’ve not already seen it, here’s a video showing the create cluster flow explained in this blog.
How to Use Samsung Smart Switch to Update Your Galaxy Phone
iPad 2018 Review: Why Didn’t I Try an iPad Sooner?

I’ve used a lot of tablets. Android, Chrome OS, Kindles, even Windows.
How to Remove Google From Your Life (And Why That’s Nearly Impossible)
How to Hide a Recovery Partition (or Other Drive) in Windows
Sunday, 15 April 2018
Geek Trivia: Which Of These Pine Tree Species Require Forest Fires To Propagate?
5 Reasons Kodi Users Should Just Switch To Plex Already
Saturday, 14 April 2018
Geek Trivia: Sunsets Over The Ocean Will Rarely, But Curiously, Produce A Flash Of?
How to Remove Facebook from Your Life (And Why That’s Nearly Impossible)
Check Out How Google, Amazon, and Facebook Looked Decades Ago
SpaceX's valuation climbs to $25 billion with new funding round
Coinsecure announces Rs 20 lakh bounty to help them recover lost bitcoins
How Apple’s New Business Chat Works
Friday, 13 April 2018
Geek Trivia: Which Of These Popular Novels Began Life As A Radio Show?
Jenkins X making awesome progress after 24 days
It’s been 24 days since we announced Jenkins X, a CI/CD solution for modern cloud applications on Kubernetes. I’m truly blown away by the response and feedback from the community - thank you!
We’ve also had lots of folks report that they’ve successfully used Jenkins X on a number of clouds, including GKE, AWS and AKS, along with on-premise clusters, which is great to hear!
Here’s a brief overview of the changes in the last 24 days from the Roadmap:
-
we now fully support GitHub and GitHub Enterprise. BitBucket Cloud and Gitea support is almost there too, and hopefully BitBucket Server and GitLab are not too far away either. For more detail see supporting different git servers
-
For issue tracking we support GitHub, GitHub Enterprise and JIRA. For more detail see supporting issue trackers
-
Gradle support is now available from jx create spring or by importing Gradle apps
-
Go, Node and Rust build packs are now available with more planned
Also we’ve made it a little bit easier to keep your jx binary up to date continuously. Just type one of the following:
-
jx version (http://jenkins-x.io/commands/jx_version/) will prompt you if there is a new version available and, if you confirm, it will upgrade itself
-
jx upgrade cli (http://jenkins-x.io/commands/jx_upgrade_cli/) will upgrade the jx binary if a new version is available, or jx upgrade platform (http://jenkins-x.io/commands/jx_upgrade_platform/) will upgrade the platform
For more detail on the changes over the last 24 days with metrics please see the changelog generated by Jenkins X
We’d love to hear what you think of Jenkins X and the Roadmap - please join the community.

