Registration, Breakfast, expo, and networking
Oct 28 2015
Cole Crawford, CEO, Vapor IO Dan Pitt, Executive Director, Open Networking Foundation Roger Strukhoff, Principal, Tau Institute
IT pros representing different industries reveal lessons learned from implementing open-infrastructure solutions and provide insights for shops evaluating them. To be discussed: a) defining open infrastructure; b) when and why to consider it; c) implications of open infrastructure and likely impacts on existing IT processes and resources; d) new skills requirements; e) impact of open on the current I&O management model; f) serviceability; g) financial considerations, planning, and understanding.
John Gromala, Senior Director, Hyperscale Product Management, HP Reaz Rasul, Vice President & General Manager, Global Hyperscale Business, Hewlett-Packard
How to build cloud services infrastructure by leveraging emerging open technology standards to reduce operational risk.
Greg Pettine, Director Business Development, Data Center Service Provider Group, Schneider Electric Joseph Ryan, Director of Data Center Development for the Americas, Microsoft Mark Thiele, EVP, Data Center Technology, Switch SUPERNAP Richard Donaldson, Director Infrastructure Mgmt & Operations, eBay
Dr. Rajat Ghosh, Founder and CEO, AdeptDC
Deploying automated preventive maintenance for distributed IT systems, real-time monitoring and optimization, just-in-time resource allocation, and rapid change management. This session will initiate an open discussion about the different problems that can be solved using AI technology.
Kyle Julian, Data Center Application Director, S&C Electric Company
Darren Wu, Sales Manager, Telehouse Shanghai
Anthony Rossabi, EVP, Marketing, Sales & Strategy, Telx Drew Leonard, VP Colocation Management, CenturyLink Eli Scher, CEO, New Continuum Data Centers Hunter Newby, CEO, Allied Fiber Thomas Roloff, SVP, EMC Global Services
When considering expansion or consolidation of data center or cloud provisioning as a third-party provider, what are the benefits and disadvantages of pursuing growth with a homogeneous IT/cloud infrastructure offering versus providing “add-on” capabilities and services to your prospective end-user clients? What do clients want?
Cole Crawford, Chairman, StackingIT Peter Judge, Global Editor, DatacenterDynamics Roger Strukhoff, Principal, Tau Institute
Data is doubling every three years, putting pressure on data center hardware and software to keep up. Only open innovation can provide the path for technology providers to maintain pace with data hypergrowth. As a community-based initiative that measures and ranks the openness and leadership of the software and hardware that is transforming data centers, the Open Performance Grid (OPG) has been created to enable and facilitate the required technological progress.
Andy Lawrence, Research Vice President - Datacenter Technologies & Eco-Efficient IT, 451 Research Don Beaty, President, DLB Associates Gary Rackliffe, Vice President, Smart Grids North America, ABB Mukesh Khattar, Technical Executive, Data Centers, EPRI Peter Gross, VP, Mission Critical Systems, Bloom Energy
Peter McCallum, Vice President, Datacenter Solutions, FalconStor
Let’s face it: embracing new storage technologies and capabilities, and upgrading to new hardware, often results in added complexity and cost. The reality is that when IT equipment, platforms, and applications do not integrate with one another, the resulting “sprawl” of storage islands and silos on disparate systems can be costly, risky, disruptive, and time-consuming. But it does not have to be that way. A software-defined approach eliminates lock-in AND lock-out, while reducing cost and complexity.
Delivering a unified approach to security is essential to a robust security strategy. But how is this possible when running diverse proprietary applications for clients who are all configured in different ways, whose data you may not have access to or visibility into, and whose vulnerabilities you therefore cannot fully assess? How do you ensure interoperability and compatibility of security solutions when faced with different vendors and applications?
Andree Jacobson, Consultant, AQUILA Bob Bolz, HPC & Business Development, AQUILA Phil Hughes, CEO, Clustered Systems Spencer Lail, Principal Technology Evangelist, Brocade
Many of today’s hyperconverged data centers are beginning to more closely resemble extreme-scale HPC installations. High-bandwidth, low-latency interfaces do present new challenges for provisioning and managing resources, but they can provide a scalable, flexible resource for offering broad data-center-as-a-service features, including real HPC.
Don Beaty, President, DLB Associates Gustav Bergquist, CTO, Bahnhof Peter Gross, VP, Mission Critical Systems, Bloom Energy Peter Judge, Global Editor, DatacenterDynamics
Historically, "sweating the assets" had more to do with maxing out a facility's capacity (primarily power) and driving it into the ground at the end of its lifecycle without incurring the enormous capital costs of UPS or HVAC chiller replacement or upgrades. Is that now outdated thinking? If you build a 5 MW data center, the full infrastructure stack ought to be designed to get the greatest possible performance from that finite power resource. And it ought to be designed so that both IT and facilities can be upgraded in an agile fashion, allowing for evolutionary, new-tech-driven performance improvement. So who's doing that now? And what's their planning secret?
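The "finite power resource" framing above can be made concrete with back-of-the-envelope arithmetic: at a fixed facility power cap, every point of infrastructure overhead you shave (lower PUE) converts directly into usable IT capacity. The figures below are illustrative assumptions, not numbers from the session.

```python
# Back-of-the-envelope power budget for a fixed-capacity facility.
# PUE (power usage effectiveness) = total facility power / IT power,
# so at a hard facility cap, supportable IT load = total / PUE.

def usable_it_load_kw(facility_kw: float, pue: float) -> float:
    """IT load (kW) supportable when total draw (IT + overhead) is capped."""
    return facility_kw / pue

# A 5 MW facility: legacy plant at an assumed PUE of 1.7
# versus an upgraded plant at an assumed PUE of 1.2.
legacy = usable_it_load_kw(5000, 1.7)    # about 2941 kW of IT load
upgraded = usable_it_load_kw(5000, 1.2)  # about 4167 kW of IT load
print(round(upgraded - legacy))          # extra IT kW unlocked by the upgrade
```

Under these assumed PUE values, the same 5 MW shell supports roughly 1.2 MW more IT load after the plant upgrade, which is the economic case for "sweating" the power envelope rather than the aging equipment.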
Calista Redmond, Director, OpenPOWER Global Alliances, IBM
Matt Jayjack, Director of Product, MCIM Michael Dongieux, Principal, MCIM
Andrew Cencini, VP Engineering, Vapor IO
Gustav Bergquist, CTO, Bahnhof
Almost all electricity used in a data centre is transformed into heat. Today, the standard procedure is to reject this heat into the atmosphere. This is a waste of the earth’s resources, a waste of energy, and a waste of money. Today a data centre is considered “green” if the electricity used is produced from renewable sources. The question is, what should we do with the heat that is produced within the data centre? Fortum has, together with Bahnhof and a few other companies, developed and proven a concept to recycle excess heat from data centres. The audience will learn how to use excess heat from data centres to warm buildings and hot water across Stockholm, thus turning a cost into revenue, higher efficiency, and a better business and environment. At the same time the data centre gains an extra redundant cooling system!
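Since, as the session notes, almost all electricity drawn by the IT load re-emerges as heat, the reusable energy can be sized with simple arithmetic. The recovery fraction and load figure below are hypothetical placeholders, not Fortum or Bahnhof data.

```python
# Rough sizing of recoverable heat from a continuously running data centre,
# assuming nearly all IT electricity ends up as heat.

HOURS_PER_YEAR = 8760

def annual_heat_mwh(it_load_mw: float, recovery_fraction: float) -> float:
    """Heat energy (MWh/yr) captured from a continuously running IT load."""
    return it_load_mw * HOURS_PER_YEAR * recovery_fraction

# 1 MW of IT load with an assumed 80% of its heat captured
# for a district-heating network:
print(int(annual_heat_mwh(1.0, 0.8)))  # MWh of heat per year
```

Multiplied by a local district-heating price, a figure like this is what turns rejected heat from a cooling cost into a revenue line.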
Kyle Tessmer, Service Representative, Uninterruptible Power Supplies Division, Mitsubishi Electric Power Products
As technology advances seemingly by the minute, a failure in your UPS equipment costs your company more now than ever. UPS service can come from a variety of vendors and can be difficult to differentiate. Between the varying service-contract levels, scopes of work, and additional services, it’s easy to lose track of exactly what you purchased. Service requirements and a simple guide to recommended services will be provided to make sure your UPS runs efficiently.
The Transformation from Vendor-led to Open-based, Disaggregated Data Center and Cloud Infrastructure. The senior-most executives from leading open-source organizations reveal the profound implications of the changing game (and business). These experts GPS-locate us on the roadmap from vendor-driven, closed, proprietary, monolithic technology to open-source, knowledge-community-designed, distributed technology. Why is this important? Cost, transparency, efficiency, agility, speed, performance, and resilience. These expert crowd-sourced innovation engines are rewriting the rulebooks and the economics for the provision of data center and cloud technology services.