IBM announced this week that it is working with Continuum Analytics to offer Anaconda, an Open Data Science platform, on IBM Cognitive Systems. Anaconda will also integrate with PowerAI, IBM's software distribution for machine learning and deep learning, making it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.
IBM developed PowerAI to accelerate enterprise adoption of the open-source machine learning and deep learning frameworks used to build cognitive applications. PowerAI reduces the complexity and risk of deploying these open-source frameworks on the Power architecture and is tuned for high performance.
With PowerAI, clients can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic and hyperscale environments.
These Cognitive Systems are built on IBM's POWER8 processors, which use NVIDIA's high-speed NVLink interface to connect to NVIDIA Tesla P100 (Pascal) GPU accelerators. The high-bandwidth CPU-to-GPU and GPU-to-GPU NVLink connections boost the performance of deep learning and analytics applications; the CPU-to-GPU NVLink interface is available on POWER8 CPUs.
The Anaconda platform simplifies package management and deployment while bringing capabilities for large-scale data processing, predictive analytics, and scientific computing. Developers using open-source ML/DL components will now be able to use Power as their deployment platform and take advantage of Power optimization and NVIDIA GPU acceleration.
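To illustrate the kind of scientific-computing workflow the bundled libraries support, here is a minimal sketch that fits a least-squares line with NumPy (one of the libraries Anaconda ships); the data and coefficients are invented for the example:

```python
import numpy as np

# Hypothetical data: 50 noisy samples of y = 3x + 2.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.5, x.size)

# Ordinary least squares via the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

Because Anaconda manages the package builds, a script like this runs unmodified on any platform Anaconda supports, including Power.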
In addition, there continues to be growing support for the OpenPOWER Foundation. The Foundation recently announced the OpenPOWER Machine Learning Work Group (OPMLWG).
The new group includes members such as Google, NVIDIA, and Mellanox, and provides a forum for collaboration that will help define frameworks for the productive development and deployment of machine learning solutions using OpenPOWER ecosystem technology. The Foundation has also eclipsed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba.
“Anaconda is an important capability for developers building cognitive solutions, and now it’s available on IBM’s high performance deep learning platform,” said Bob Picciano, senior vice president of Cognitive Systems. “Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale.”
“With more than 16 million downloads to date, Anaconda is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights and transform basic data into a goldmine of intelligence to solve the world’s most challenging problems,” said Travis Oliphant, co-founder and chief data scientist, Continuum Analytics. “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”
As one of the fastest growing fields of machine learning, deep learning makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. Deep learning is transforming the businesses of consumer web and mobile application companies, and it is being adopted by more traditional business enterprises as well.
Last month, IBM announced a new container service on Bluemix, its cloud platform, to increase the speed and simplicity with which developers can build and manage more secure, cognitive apps. Available on IBM Cloud, the service uses Kubernetes, an open-source container orchestration system that leverages the Docker engine.
Now available in beta, the Bluemix Container Service expands IBM's commitment to, and leadership in, open technologies. As a contributor to both the Kubernetes and Docker projects for over three years, IBM has helped to create and mature container technology. Bluemix itself is one of the few major cloud platforms built on a container-native foundation, which has enabled developers to build and ship code with containers since its launch in 2014.
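For context on what such a service orchestrates: a Kubernetes workload is described declaratively in a manifest like the sketch below. All names and the container image here are hypothetical placeholders, not IBM examples, and the manifest uses the current `apps/v1` API:

```yaml
# Hypothetical Kubernetes Deployment: three replicas of a Docker-packaged app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # placeholder Docker image
          ports:
            - containerPort: 8080
```

Kubernetes continuously reconciles the cluster toward this declared state, restarting or rescheduling containers as needed, which is the core of the orchestration the article describes.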