The flights are set, the rooms are booked, and VCE is ready to head to the Sunshine State for Cisco Live next week. We'll have a strong presence this year, with customer presentations, executive sessions, and demonstrations at several booths throughout the show.

Along with catching the Cisco Live keynotes and super sessions, you won't want to miss Columbia Sportswear's Michael Leeper on Tuesday at 3 pm ET. His presentation, "Virtualizing SAP and our Hybrid Cloud Future," will discuss how Columbia Sportswear has virtualized almost 100% of its enterprise workloads, including SAP, PeopleSoft, and Microsoft SQL, by leveraging our Vblock Systems.

VCE's chief technology officer Trey Layton is featured in two sessions this year. He'll discuss how converged infrastructure and our partnership with Cisco play into hot industry trends such as mobility, big data, and cloud computing, as well as "Software Defined Everything," including software defined networks and software defined data centers.

- Tuesday at 8:00 am ET – Real Software Defined Networking Solutions: Optimize in an Open Network Environment – CiscoONE
- Tuesday at 4:45 pm ET – Cisco's Partner Super Session – Fireside Chat with Cisco's John Graham

Attendees wanting a deeper dive into our technology should catch George Viebeck's presentation at Cisco's Solutions Theater (Cisco's booth) at 2 pm ET on Tuesday, as well as our demonstrations of workload mobility automation at the Cisco Unified Data Center booth #1166, EMC booth #1031, and VMware booth #1401.

Lastly, make sure you stop by VCE's booth #1011 to catch presentations from Columbia Sportswear, CSC, and Xerox, as well as test drive our recently announced software and hardware for data center environments, including:

- VCE Vision Intelligent Operations software
- Vblock System 100 for remote office/branch office deployments
- Vblock System 200 for mid-sized data centers
- Vblock System 300 and Vblock System 700 family enhancements for greater configuration flexibility in optimizing compute, networking, and storage resources
- Vblock Specialized Systems for SAP HANA® software

We hope to see you there! Keep your eyes peeled for VCE's giveaways! Tweet a picture with a VCE Vblock™ System with #VblocksRock for a chance to win VCE Vblock™ Building Block kits and a LEGO™ Mindstorms™ NXT 2.0 kit.

LEGO and LEGO Mindstorms are trademarks of the LEGO Group of companies, which does not sponsor, authorize or endorse this contest.
The official starting point of the Boston Marathon, the oldest annual marathon in the world, is Hopkinton, MA, where EMC's corporate headquarters also sits. The area's many colleges and universities make Boston an international center of higher education and medicine and a world leader in innovation. It is no wonder, then, that MassChallenge, the largest-ever startup accelerator and the first to support high-impact, early-stage entrepreneurs with no strings attached, was established here in the "Innovation Waterfront District."

MassChallenge launched a program in Israel with the support of EMC in the spring of 2013 – over 100 startups and 60 judges participated. The program connects the most promising early-stage companies with the resources and networks at the heart of MassChallenge's Boston accelerator.

In the announcement, John Harthorne, Founder & CEO of MassChallenge, said, "MassChallenge Israel features a deep infrastructure of supporters and resources within Israel, strong connections between Israel and Boston, and exclusive opportunities for Israeli startups in Boston… Boston and Israel have had a special relationship for a long time that has generated close collaboration on both business and technological fronts." MassChallenge selected Israel as its first international location because the country sits at the cutting edge of technology and entrepreneurial activity globally, and because MassChallenge is eager to enable top startups to scale quickly and effectively.

Understanding the power of this partnership, I worked with John and his MassChallenge team to create new opportunities for collaboration between our organizations.
Initially, I drove EMC's global participation in the MassChallenge judging process and inspired local leaders to mentor the 128 finalists. Subsequently, I established and hosted a new program called "Technology Directions: Keeping It Real." Our first event, held in May 2013, was "Open Source Software – What's the Buzz All About?" Speakers included leaders from Rackspace and two MassChallenge startups: AppSembler and ProfitBricks.

Thinking Outside the Box

I am not the only one working to extend and grow EMC's partnership with MassChallenge. EMC's Executive Business Center (EBC) Director Bernie Baker and his team partnered with me and Steve Todd, EMC VP and Fellow, to develop an EBC customer innovation tour at MassChallenge's corporate headquarters. AIG Israel was the first to tour with us, and their Israel CIO reflected that "EMC is a brilliant company that is stimulating new initiatives."

But it is not only about customers; it is also about employee enthusiasm and knowledge sharing. Thus, we created an EMC Innovation Ambassador program for employees with different skills, talents, and roles to share their personal innovation stories with customers and partners. The mission of the Innovation Tour is to "envision an inspired creative community where EMC, our customers, and our partners can collaborate globally around innovation."

MassChallenge Keeps Growing

What is truly exciting is to see MassChallenge growing. Some of its supporters include US President Barack Obama, Massachusetts Governor Deval Patrick, venture capitalist Desh Deshpande, Dr. Josh Boger of Alkeus Pharmaceuticals, the Kraft family, and top brands like Fidelity, Microsoft, and Verizon. Since starting in 2010, the 489 MassChallenge alumni companies have raised $472 million in funding, generated nearly $200 million in revenue, and created nearly 4,000 jobs.

In December 2013, MassChallenge and EMC UK & Ireland President James Petter met with UK Prime Minister David Cameron to discuss the launch of a program expansion in the UK. In London, MassChallenge will engage hundreds of startups, expert mentors, investors, students, and business leaders from across the UK, Europe, and the world – placing London at the center of the global innovation map.

It has been less than a year since I began engaging with MassChallenge. We have traversed continents to bring the spirit of entrepreneurship outside the walls of our corporate campus in order to catalyze a connected community of innovation and thought leadership.

To learn more about MassChallenge, visit their homepage at www.masschallenge.org and consider reading the 2013 MassChallenge Impact Report or reviewing the 2013 winners.
The digital economy has been a key agent of change in Latin America. This is especially true in Rio de Janeiro, Brazil. As Rio prepares for the 2014 FIFA World Cup and the 2016 Summer Olympic Games, it is embracing technology and real-time data management to ensure success. The forces of Mobile, Social and Big Data are redefining a whole city and the major events that will take place there. EMC is taking a leading role in helping to enable this transformation.

Rio de Janeiro's Digital Economic Growth

Thinking beyond FIFA and the International Olympic Committee, Rio is taking advantage of its world-class appeal and these major events to attract large global organizations, embrace local academia, and promote a startup-friendly business environment. Major construction and development often pose challenges related to transportation, security and healthcare. Rio is addressing many of these challenges by leveraging technology in the most innovative manner.

The Rio Center of Operations is one example. By converging huge amounts of live data from a multitude of traffic and surveillance cameras, social networks and a variety of other data sources, the Center of Operations mobilizes the resources at its disposal to quickly remediate critical situations.

The government is not the only entity taking advantage of the growth of mobile data and infrastructure. This modern approach is helping create a greater sense of community among "cariocas" (the people of Rio). Startup companies are sprouting up to offer enhanced mobile services that help facilitate better life experiences for the millions of people who call Rio home.

The numbers that represent the digital economy are impressive. For example, Brazil has the fourth-highest Internet usage in the world, with 86 million users, and the second-largest national population on Facebook.
In addition, revenue generated via e-commerce is estimated to be worth $12B annually. This points to a technology-enabled population relying on mobile information and new social platforms to transform its economy and society altogether.

EMC Invests in Rio de Janeiro

In May, EMC will inaugurate a new R&D facility at the Federal University of Rio de Janeiro (UFRJ). This facility will be located within the Technology Park on the UFRJ campus, joining Siemens, Schlumberger and Halliburton. EMC's Brazil R&D Center (BRDC) will provide a permanent home to the team of data scientists who have been working on relevant Big Data projects for the last 18 months, including joint research with several oil & gas companies, data repositories, industry technology providers and even the Rio government.

An example of a recent breakthrough is the work the BRDC has done on seismic data compression. Seismic data was long thought incompressible, but the BRDC has created new algorithmic methods that significantly compress these terabyte-scale files to just a fraction of their original size. This is very important for an industry that is struggling to bring more and more data online for advanced interpretation and analysis. Patents should be filed this month, with field trials to commence shortly after.

The first EMC Executive Briefing Center (EBC) in Latin America will be located within the R&D Center and will support all Latin American business interests. EBCs around the world help EMC customers tackle their hardest IT transformation questions, offering access to EMC subject matter experts, executives, and engineers through collaborative discussion sessions. EMC strives to accelerate the impact of our investments by offering proven, world-class customer engagements that leverage 20 years of success and experience as a global offering.

The city of Rio is a real-life laboratory where Big Data is a force of change.
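The BRDC's seismic-compression algorithms are unpublished, so here is only a generic illustration of one standard idea in compressing numeric scientific data: reorganize the bytes so that a general-purpose compressor sees longer runs. The sketch below applies a "shuffle" filter (byte transposition) to a synthetic float32 trace before `zlib`; the data and sizes are invented for illustration and imply nothing about the BRDC's actual methods.

```python
import struct
import zlib

def shuffle_bytes(raw: bytes, width: int = 4) -> bytes:
    """Byte-transpose (the 'shuffle' filter): group the i-th byte of every
    value together, so slowly varying samples yield long runs for zlib."""
    return b"".join(raw[i::width] for i in range(width))

def unshuffle_bytes(shuffled: bytes, width: int = 4) -> bytes:
    """Invert shuffle_bytes, restoring the original byte order."""
    n = len(shuffled) // width
    planes = [shuffled[i * n:(i + 1) * n] for i in range(width)]
    return b"".join(bytes(p[j] for p in planes) for j in range(n))

# A smooth synthetic "trace": slowly varying float32 samples.
samples = [float(i) / 10000.0 for i in range(50000)]
raw = struct.pack(f"<{len(samples)}f", *samples)

plain = zlib.compress(raw, 9)                    # compress as-is
shuffled = zlib.compress(shuffle_bytes(raw), 9)  # shuffle first, then compress
# The round trip unshuffle(shuffle(raw)) == raw is lossless by construction.
```

Real seismic codecs go much further (transforms, quantization, domain-specific models), but the principle is the same: exploit structure in the numbers that a byte-oriented compressor cannot see on its own.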
It is an honor for us at EMC to be part of this quest as the world focuses its attention on Rio over the next several years.

For more about the BRDC, follow @kbreitman and @timvoyt on Twitter.
I have to be honest. Much of what I predict to be significant for the data storage industry in 2018 may go unnoticed by most IT professionals, even though some of it represents major progress of a kind we haven't seen in nearly a full generation!

So much has been happening in the development of exciting new storage-class memory (SCM) that will help make arrays faster and more efficient in processing general purpose workloads, and this year I believe we'll see a tangible payoff with the mainstreaming of this technology. The big news is all about Non-Volatile Memory Express (NVMe) and the emergence of this highly anticipated interface within commercial storage arrays. With Artificial Intelligence and Machine Learning seemingly all the rage in IT these days, I expect to see emerging and increasingly practical use cases coming to commercial storage offerings that attempt to further automate storage management, taking concepts such as intelligent data placement and performance tuning to the next level. And momentum is building for purpose-built data center hardware that provides optimized data paths designed to accelerate specialized workloads. But enough with the introductions; let's dive into the details!

Prediction #1: NVMe Will Go From Promise to Productivity in Commercial Storage

A combination of lower cost components being produced in larger volumes by multiple suppliers and a fully developed specification will finally propel NVMe toward mainstream adoption in enterprise storage. With both storage and servers NVMe-enabled, enterprises will be offered compelling choices that take advantage of a more efficient hardware/software stack delivering a performance trifecta: low latency, reduced CPU utilization and faster application processing, accelerating performance over that of non-NVMe flash SSDs.
While NVMe-enabled storage arrays will get an initial boost in performance, the true potential of NVMe will be realized later in 2018 when next-generation SCM becomes available (more on that below).

Although NVMe-based flash drives and a handful of storage arrays have been offered for several years, they have typically been 30-50% more expensive than equivalent all-flash arrays. At that kind of premium, the jury was out on whether these NVMe products were worth the price, since most enterprises wouldn't have noticed a cost-effective difference in aggregate for general purpose workloads. However, that is changing. Technology maturity means volumes are rising, component costs are coming down and multiple NVMe SSD suppliers are finally ready for prime time.

We're only at the beginning of the NVMe innovation cycle. This overnight success has been 10 years in the making, with both Dell and EMC playing prominent roles along the way. In 2018, our industry vision, intellectual property investment and long-term strategic supplier partnerships will pay off. Although a few proprietary NVMe storage products were launched in 2017, broad mainstream solutions will require long-term commitment and dedicated investment to keep up with the latest advances in flash and SCM. We're ready.

Another underappreciated aspect of NVMe is lower CPU utilization. The NVMe software stack executes fewer instructions and is highly optimized for the parallelism of contemporary multi-core processors. With lower CPU utilization for storage, more of your server is available to run your applications, which translates to better TCO, improved infrastructure efficiency and lower software license costs.
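The TCO argument can be made concrete with a toy sizing model: if less of each server's CPU is burned by the storage I/O stack, the same application work fits on fewer servers (and fewer per-server licenses). All figures here, core counts and overhead percentages alike, are invented for illustration, not measurements of any NVMe product.

```python
import math

def servers_needed(app_cores: float, cores_per_server: int,
                   storage_overhead: float) -> int:
    """Servers required when a fraction of each server's cores is
    consumed by the storage I/O software stack."""
    usable = cores_per_server * (1.0 - storage_overhead)
    return math.ceil(app_cores / usable)

# Hypothetical fleet: 1,000 cores of application work on 32-core servers.
legacy = servers_needed(1000, 32, storage_overhead=0.15)  # heavier legacy stack
nvme = servers_needed(1000, 32, storage_overhead=0.05)    # leaner NVMe stack

print(legacy, nvme)  # the gap is servers (and licenses) you never buy
```

Even a 10-point drop in storage-stack overhead changes the fleet size, which is where the "software license cost reduction" claim comes from.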
This kind of performance advantage will be highly sought after by organizations running OLTP and real-time analytics.

Prediction #2: NVMe Over Fabrics (NVMe-oF) Will Continue to Emerge and Develop

Hardly anyone will adopt NVMe-oF for production until the industry gets a rich, interoperable set of ecosystem components. However, we will see incremental progress in 2018, first with NVMe-oF over Fibre Channel for the incumbent SAN, and then with NVMe-oF over Ethernet for next-gen data centers. It's all in line with the development of new interfaces and new storage protocols, but none of it will happen overnight. We need the ecosystem to come along: new switch ports, software stacks, new host bus adapters (HBAs), etc. For adoption to grow, all these pieces will need to mature into a full-fledged ecosystem. As a practical example, at Flash Memory Summit 2017 there must have been a dozen different NVMe-oF solutions announced, and I'd guess that no two interoperated with one another. That just reflects the current state of development. I'm a believer and a champion, but it's early days still.

When NVMe-oF does hit prime time, watch out. Vendors who do the homework to vertically integrate NVMe across their network, compute and storage products will be positioned to offer an incredible performance package for organizations looking to super-charge their SANs. Early adopters will likely be in the HPC, scientific and Wall Street high-frequency trading domains, though enterprises upgrading to modern data centers running cloud-native, IoT and AI/ML applications won't be far behind.

Prediction #3: Storage-Class Memory (SCM) for Storage Will Become a Reality in 2018

Our industry has largely been about spinning hard drives and dynamic random-access memory (DRAM) forever. In 2008, flash came in and pushed out hard drives as the leading storage media type. In 2018, for the first time in a generation, there are several viable emerging memory candidates in this space.
First is Intel with 3DXP. They dipped their toe in the water last year, and 2018 is when it becomes part of mainstream storage architectures. This new SCM should operate in the 10-20 microsecond realm instead of the 100-200 microsecond range for flash. This roughly 10x performance improvement will manifest as both storage cache and tier to deliver better, faster storage.

Of course, low latency applications, such as high-frequency trading, will benefit tremendously from SCM. However, SCM is not just for top-of-the-pyramid workloads and the lunatic fringe; the average enterprise will benefit anywhere the equation "Time = Money" comes into play. SCM will be leveraged for real-time risk management – at any time, your most important data needs to be accessed at the lowest possible latency. And it's still just the beginning. We don't get to see a completely new media technology every day. As Pat Gelsinger, CEO of VMware, once said, "There have only been four successful memory technologies in history and I've seen over 200 candidates to become the fifth." The fifth is here, and there are more to come.

Prediction #4: Advancements Will Be Made Around Artificial Intelligence and Machine Learning (AI/ML) in Storage

As an industry, we have been using machine learning techniques to tier data and implement unique solutions in storage for years. Take, for example, the "Call Home" functionality in VMAX. Our products send regular, high-frequency telemetry on all aspects of our storage platforms to Customer Service. This data is analyzed for patterns and anomalies to proactively identify situations before they become problems. We're flattered that this approach has been imitated by others, such that it is now a best practice for the industry. Another win for customers.

For 2018, we'll see AI and ML integration accelerate.
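As a minimal sketch of the kind of telemetry anomaly screening described above, here is a simple z-score flagger over a latency time series. Real Call Home analytics are far richer than this; the data, threshold, and function are invented purely for illustration.

```python
import statistics

def flag_anomalies(series, threshold=2.5):
    """Flag indices whose value deviates from the series mean by more
    than `threshold` standard deviations: a classic first-pass screen
    before deeper pattern analysis."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Synthetic array read latencies (ms) with one obvious spike at index 6.
latencies = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 9.5, 1.0, 1.2, 1.1]
print(flag_anomalies(latencies))
```

The proactive-support idea is exactly this loop run continuously at fleet scale: flag the deviation, then correlate it against patterns seen on similar arrays before the customer notices a problem.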
Intelligent data tiering will become finer-grained; we'll see management of more types of media, such as SCM, and we'll be designing in the use of new forms of hardware acceleration to enable that. We will adapt and adopt the latest innovations from the semiconductor world, such as graphics processing units (GPUs), tensor processing units (TPUs) and field-programmable gate arrays (FPGAs), to enable autonomous, self-driving storage.

New array applications for AI/ML capabilities will come into play in different ways. Consider the array dynamics when a new workload comes along. AI/ML spring into action, drawing upon telemetry not only from this particular array but from cloud-based analysis of all similarly configured arrays, to derive the optimal configuration to accommodate the new workload without impact to existing applications. AI/ML turns the global experience pool of all workloads on all storage arrays into an automated tuning subject matter expert. Today's capabilities of CloudIQ and VMAX Call Home are just the beginning. Eventually, the idea is that we'll be able to use cloud-based AI and ML to fully automate the operation of storage. This will mean storage systems that do more of the data management themselves, enabling organizations to shift dollars away from today's IT maintenance budgets and toward tomorrow's Digital Transformation initiatives.

Prediction #5: The Pendulum Will Swing Back to Heterogeneous Infrastructure to Accommodate Specialized Workloads

Five years ago, the typical view of the data center was centered on row upon row of identical x86 servers. These servers, automated and orchestrated as a cloud, delivered a standardized set of IaaS and PaaS capabilities for the vast majority of IT workloads. Today, we're seeing rapid growth in a new class of algorithmic workloads that are often better suited to specialized processors than to general purpose, homogeneous hardware.
Purpose-built hardware often runs these workloads significantly faster and consumes an order of magnitude less power than general purpose compute running software-only solutions. This means that optimized infrastructure architectures will need the ability to deploy business solutions that take advantage of rapid advances in algorithmic processor technology while keeping all the software-defined flexibility and agility of hybrid and private clouds. Think of this as "software-defined control plane meets hardware-optimized data pipeline." Such an architecture may exploit GPUs for machine learning, FPGAs for custom functions, and offload engines for algorithmic data services such as dedupe, compression and encryption. These raw materials eventually become services, delivered across a low-latency data center fabric. This forms the architectural substrate for truly composable infrastructure. Our job will be to manage and orchestrate the dynamic provisioning of those IT services. We'll start seeing these new capabilities delivered as POCs in 2018.

2018 will be an exciting time for IT infrastructure and our customers. The contemporary IT architect will have an unprecedented set of capabilities at their disposal, from new processing models to a new class of storage media to advances in system and data center interconnects. This is especially the case in storage. With major technology advancements in both capability and affordability on the horizon, 2018 will be a year where we can truly expect to do more for less.
Dell EMC Data Protection for Microsoft Azure Stack

Cloud computing is probably the most cost-efficient way to use, maintain, and upgrade your IT infrastructure. Azure Stack brings the agility and fast-paced innovation of cloud computing to on-premises environments. Only Dell EMC offers a complete portfolio of data protection solutions for both traditional and emerging workloads, no matter where customers are in their cloud journey.

Azure brings an entirely new way of doing business in the cloud. The Azure ecosystem consists of redundant data centers located literally around the globe. In most cases, Azure can provide you with far better security, performance, and reliability than you can provide on-premises. Microsoft Azure is a comprehensive collection of cloud services intended to give developers and IT professionals the ability to build, deploy, and manage workloads, leveraging a wide variety of development and DevOps tools and offering an extensive marketplace with which to build applications and solutions.

However, as extensive as Azure's public offerings are, there are still a number of barriers that can prevent organizations from adopting a strictly public cloud model. Considerations such as regulatory compliance, data sovereignty, or a variety of edge cloud or disconnected use cases, just to name a few, can drive customers to pursue a hybrid cloud model. Microsoft has recognized these challenges and answered them in the form of Azure Stack.
The goal of Azure Stack is to provide an Azure-consistent set of services and tools, allowing developers and IT professionals to use the same tools and methodologies regardless of where an application is deployed (public, on-prem, or hybrid).

Benefits: There are three core benefits to leveraging Microsoft Azure and Azure Stack.

- Consistent Application Development – Developers have a true "write once, deploy anywhere" model based on a consistent set of tools and processes.
- On-premises Azure Services – Organizations can adopt a cloud computing model on their own terms, meeting both their technical and business challenges in a hybrid model without changing tools or methodologies based on deployment locale.
- Integrated Hybrid Delivery Model – IT organizations can transform operations to focus on delivering cloud services, predicated on integrated systems designed to deliver consistent Azure services in a predictable manner.

Protecting Your Investment: Now that you have the power of a truly hybrid Azure ecosystem at your disposal, how are you protecting your investment? What is data protection in the cloud? And how do you choose the best backup? These aren't easy questions to answer, since data protection comes in many forms and the tools and technologies for it are numerous and can be used in different combinations. A large number of choices can make cloud more difficult than traditional schemas. Still, we can simplify these seemingly complex challenges with one solution from Dell EMC.

Dell EMC recently tested the protection of databases and file systems running on virtual machines inside Azure Stack, backed up to data protection infrastructure running outside of Azure Stack on the customer network. To that end, planning around network settings and security is required. Outside of Azure Stack, network configuration for routing traffic to the internal Azure Stack network(s) will need to be planned and configured.
Inside Azure Stack, the network security groups assigned to each virtual machine need to be configured to allow inbound and outbound network traffic on specific ports, depending on the data protection solution being used (virtual editions of Avamar, NetWorker, Data Domain, etc.). When you build modern applications across hybrid cloud environments, Dell EMC's architecture and industry-leading deduplication result in a lower overall Total Cost of Ownership (TCO).

Architecture Matters: Dell EMC data protection solutions are architected to offer customers economic benefits through industry-leading, highly efficient data deduplication. Three core supporting technologies are:

Variable-Length Deduplication. Dell EMC's advanced dedupe enables the Data Domain platform to better align to incoming data structures when determining what data is unique. It produces greater data reduction than fixed-length architectures, which results in a much more scalable protection storage pool, helping to simplify management and lower Azure storage costs. Plus, Dell EMC provides global deduplication across sites and allows you to back up and replicate non-Azure Stack resources as well.

Data Domain Boost. With DD Boost software, only unique data has to be sent from client devices or the backup server to the Data Domain platform, reducing the amount of data moved by up to 99 percent. This further reduces not only the need for and cost of protection storage, but also backup time. When DD Boost is deployed with the data protection software at the Azure Stack client, it sends only the deduplicated unique data directly to protection storage, bypassing the need for a media server. The result is a smaller infrastructure footprint, and therefore fewer resources to purchase and lower egress/ingress costs to other Azure resources, not to mention faster backups due to fewer hops in the data path.
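Data Domain's implementation is proprietary, but the core idea behind variable-length (content-defined) deduplication can be sketched in a few lines: chunk boundaries are chosen by the data's own content, so an insertion early in a stream shifts only the nearby boundaries instead of every downstream fixed-size block. The fingerprint below is a simplistic stand-in for a real rolling hash (such as a Rabin fingerprint), and the chunk sizes are arbitrary.

```python
import hashlib

def chunk_boundaries(data: bytes, mask: int = 0x3FF) -> list:
    """Cut wherever a simple byte-driven fingerprint matches a bit
    pattern; with a 10-bit mask, chunks average roughly 1 KB."""
    cuts, h = [], 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # toy stand-in for a rolling hash
        if (h & mask) == mask:
            cuts.append(i + 1)
            h = 0
    return cuts

def dedupe_stats(data: bytes):
    """Chunk the stream and count total vs. unique chunks by digest."""
    cuts = chunk_boundaries(data)
    if not cuts or cuts[-1] != len(data):
        cuts.append(len(data))  # final partial chunk
    chunks, start = [], 0
    for cut in cuts:
        chunks.append(data[start:cut])
        start = cut
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(chunks), len(unique)

# Repetitive data (like successive backups of the same system) produces
# many repeated chunks; only the unique ones need to be stored or sent.
total, unique = dedupe_stats(b"backup payload block " * 2000)
```

Storing or transmitting only the unique chunks is what drives both the storage-pool efficiency and the DD Boost bandwidth savings described above.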
For 8 of 12 Data Domain customers that ESG Research analyzed, up to 98 percent of all backup jobs were completed in under an hour.*

Data Domain Data Invulnerability Architecture. While this technology doesn't improve performance or reduce costs, it ensures that mission-critical Azure Stack data is always recoverable. One way Data Domain ensures this is via inline write and read verification, which safeguards data integrity during ingest and retrieval. In addition, self-healing and ongoing fault detection further protect data's recoverability throughout its Data Domain lifecycle.

With Dell EMC Cloud for Microsoft Azure Stack, you can bring the power of Azure into your data center, behind your firewall – engineered, tested, delivered, serviced and supported by Dell EMC. Whether your applications are on-prem or in the cloud today, data protection needs to be an important part of any strategy. Dell EMC's data protection capabilities address both traditional and emerging cloud strategies. For customers looking to leverage Microsoft Azure solutions, Dell EMC is certified to deploy Avamar and NetWorker virtual editions outside of Azure Stack, protecting VMs with guest-level protection and providing customers with Azure Stack Marketplace support. It is important to note that Dell EMC currently requires an in-tenant client to facilitate backing up tenant workloads.

Why choose Dell EMC Data Protection for Azure Stack? Dell EMC data protection products are already proven in non-cloud environments and bring market-proven reliability, scale and performance to Azure Stack customers. Dell EMC is a trusted partner across the data protection portfolio. For database and filesystem protection, customers only need to install and configure the appropriate software client/agent on their virtual machines, then manage their backups and recoveries the same way they currently protect physical servers in their data centers.
On the horizon, Avamar Virtual Edition and NetWorker Virtual Edition will also be available to give customers the option of protecting Azure Stack assets using data protection that itself runs within Azure Stack. As businesses embrace the benefits of Azure Stack, the Dell EMC data protection portfolio provides a trusted foundation for businesses to transform IT through the creation of a hybrid cloud, as well as transform their business through the creation of cloud-native applications and big data solutions.

Understanding Microsoft Azure Stack

Azure Stack is an extension of Azure, bringing the agility and fast-paced innovation of cloud computing to on-premises environments. Only Azure Stack lets you deliver Azure services from your organization's datacenter, while balancing the right amount of flexibility and control – for truly consistent hybrid cloud deployments.

OTHER RESOURCES

Dell EMC Azure Stack: https://www.dellemc.com/en-us/solutions/cloud/microsoft-azure-stack.htm

Explore More: Dell EMC Cloud for Microsoft Azure Stack

*ESG white paper sponsored by Dell EMC, "The Economic Value of Data Domain," May 2017. 8 out of 12 customers analyzed achieved backup in less than one hour. Actual results will vary.
Dell Technologies, VMware and Telenor have collaborated closely to realize the potential and promise of 5G, edge and machine learning innovation applied to a TeleHealth use case, as part of a broader commitment to joint research and development going forward.

Health service administrators are looking at how advanced technology can play an enabling role in transforming healthcare delivery. Enabled by 5G and integrated Multi-Access Edge Computing (MEC), the design of better-connected and coordinated IT services will dramatically advance urgent healthcare delivery. These new models will create improved experiences and life-enhancing outcomes for the patients in their care.

The vision of our Proof of Concept (PoC), which we'll be showing at our booth at MWC 2019 in Barcelona, was to examine the scenario of possible stroke victims at remote locations. We show how the continuous collection and streaming of patient data is enabled from initial contact through to arrival at the destination hospital emergency department. We demonstrate that the seamless composition of services can provide a secure, reliable, low-latency mobile HD video link from a remote ambulance to a hospital, plus an edge-assisted remote stroke assessment application that shortens the time to assess and provide urgent care to potential stroke victims (and save lives). The availability of remote, real-time HD video streaming from paramedics to the hospital emergency room or medical specialists enables more intelligent and timely decision making and improves the probability of better patient outcomes.

Additionally, new health assessment innovations, such as the telestroke application, are being developed to enable faster remote diagnosis, combining machine learning and real-time edge computing.
This mission-critical use case scenario (figure below) was approached from an end-to-end perspective, focusing on several key new 5G network capabilities:

E2E Network Slicing – leveraging Open Source MANO (OSM) to provide a virtual multi-layer slice from the core, through SDN/NFV, to the radio access network, enforcing QoE requirements on common infrastructure

Multi-Access Edge Computing (MEC) – a common infrastructure platform, based on Dell Technologies hardware with VMware Integrated OpenStack (VIO) and software-defined networking (NSX), supporting the hosting of virtual applications for low latency, as well as hardware acceleration and GPU for ML/analytics capabilities

Hardware Acceleration – GPU offload of the x86 CPU for video, image processing, ML/analytics and real-time processing

Intelligent Workload Placement – as the ambulance moves between different edge sites toward the hospital, OSM instantiates, monitors and scales out critical VNFs at the appropriate MEC site

Automation and Programmability – OSM provides cross-plane/domain orchestration and FCAPS management in concert with vRealize Operations Manager and QoE slice management

5G, IoT, network slicing and Multi-Access Edge Computing are transformative to the telecom architecture, ecosystem, partnerships and operating model. Dell Technologies is at the forefront of 5G innovation, actively participating in EU 5G research projects and relevant standards and open source consortia, and developing 5G/Edge use cases with leading telecom service providers. We are leveraging this innovation work to solve real problems, deepen our understanding of vertical solution requirements and business drivers, and build our 5G partner ecosystem.
Dell Technologies provides common validated 4G and 5G NFVi and edge computing solutions that are open and integrated with both commercial and open source partners.

To learn more about the TeleHealth collaboration effort or all our NFV, Edge, 5G and IoT solutions, come visit the Dell Technologies booth at MWC 2019 in Barcelona, Hall 3 – Stand 3M11, from Feb 25-28, or watch the video series.

We would like to thank our partners on the EU Horizon 2020 program, SliceNet and Telenor, for their help developing this PoC. The SliceNet project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 761913.
With MWC 2019 right around the corner, 5G promises richer experiences for consumers and new opportunities for service providers. But this is no simple evolution from 4G; it is a massive transformation.

5G demands new distributed architectures that leverage software-defined infrastructure to automate the delivery of mobile services, along with embedded intelligence and analytics-driven telemetry to ensure consistent service levels. This new dynamic changes the traditional telco vendor ecosystem of bespoke single-purpose devices and software, opening it to new entrants with multi-purpose, standards-based offers leveraging common APIs.

This once-in-a-decade transformation of the mobile platform will manifest itself along four main vectors:

Digital Transformation – monetization of data analytics and artificial intelligence. Whether we talk about autonomous vehicles, augmented reality or IoT, at the core of just about every 5G business case is the need to transport, aggregate, process and react to massive amounts of data collected from an exploding number of smart “things.” For the first time in any of the “G” evolutions, we are witnessing a use-case-driven network upgrade focused on digital services as the foundation of future revenues.

IT Transformation – modernizing infrastructure to support automated processes. New use cases are great, but they cannot be realized by doing what has always been done – building proprietary, vertically integrated networks that are opaque to services and infrastructure operations. It’s a multi-cloud future where C-RAN, edge, telco and public clouds are fully integrated, and real-time telemetry and automation help build and manage real-time services.

Workforce Transformation – providing the tools and skills for operations to take advantage of the new infrastructure capabilities. IT Transformation is only as successful as the workforce capable of taking advantage of it.
Cloud-native, horizontal 5G networks won’t be operated the same way that proprietary, vertically integrated 4G networks were. Mobility and networking experience are no longer sufficient – operational excellence will integrate this domain knowledge with data science and computer science skill sets.

Security Transformation – integrating security natively into all services and processes, rather than as an overlay. These foundational shifts to services, infrastructure and workforce cannot be made in isolation from a security transformation. Networking and security are evolving in tandem, and 5G is a beneficiary of that evolution. We see security both as an opportunity to monetize the 5G network with unique Security-as-a-Service offerings and as foundational to all services, infrastructure and processes.

Together, these new requirements call for solutions that are:

Open, standards-based
“Cloudified” – multi-cloud
Scalable
Secure
Virtualized
Automated
Edge-enabled

All of these are well-known and well-understood concepts in the data center, where Dell Technologies has helped customers thrive for decades. From device to edge to core to cloud, capabilities must be omnipresent and ubiquitous – and Dell Technologies plays a critical role across all of them, with the most comprehensive portfolio of validated and customized solutions designed to support 5G and edge deployments. Partnering with Dell Technologies helps reduce time to market with next-generation platforms, consulting resources and global execution.
This trusted partnership allows Dell Technologies to meet any customer’s needs, regardless of how the business evolves. The flexibility of deploy-anywhere edge computing brings enhanced optimization to IoT, EUC and SP edge through our purpose-built infrastructure solutions and tightly coupled software from VMware, bringing increased visibility, manageability, deployment flexibility, and global services and support to these data-rich environments.

I encourage you to spend some time at the Dell Technologies and VMware booth in Hall 3, Stand 3M11. Allow me to provide a quick overview of some of the opportunities for you.

Participate in product demos and ask our experts
Check out our VR drone demo and learn more about 50+ use cases including network slicing, Project Dimension, virtual cloud networking, hardware acceleration, unified endpoint management, TeleHealth and much more

Discover the power of Dell and VMware
Come view our Super Screen, which displays our validated suite of solutions that can accelerate your journey to 5G while preparing you for edge-to-edge success in the future

Interact with our edge and IoT touchscreen
Experience cities of the future in a 5G world, as well as an IoT industry showcase where we’ll display solutions in key verticals

Visit our 5G Immersion Room
Experience 5G in action through product simulations that highlight incredible opportunities for your business

Tour our Micro Modular Data Center
Experience the capabilities and deployment flexibility of these self-contained units, designed with the security, environmental and performance capabilities to be placed in remote locations where real estate is limited

Come see all of these edge, 5G and IoT capabilities in action and meet our experts at MWC 2019 Barcelona, Hall 3, Stand 3M11; or go here for an update on associated webinars and video series, or to schedule an in-person meeting. We look forward to seeing you in Barcelona. Safe travels.
In today’s digital world, businesses are built on data. That data has value not only to the organizations that house it, but also to external and internal threats. To ensure that your business has the digital services it needs, you need trusted infrastructure. Research by ESG and Dell shows that the return on investment, as well as the risk reduction, obtained from running a trusted data center is significant. On the spectrum of Leader and Laggard IT organizations, 92 percent of Leaders surveyed reported that investments in infrastructure technologies to maximize uptime and availability and minimize security risk have met or exceeded ROI forecasts.

Mid-market organizations must respond quickly to changing business needs to get ahead of the competition when everyone is ‘always-on.’ How do companies maintain trusted data centers and compete to become the enterprises of tomorrow while also managing IT budgets very closely? The answer lies in efficient solutions that enable businesses to do more with less and securely extend the value of their investments. Brands must also have the confidence and peace of mind that vital business data is protected and recoverable no matter where it resides.

Why does leading in data center trust matter? The cost of being less secure is high. Surveyed firms estimate their average cost of downtime due to security breaches at $30,000 to $38,000 per hour. Notably, 38 percent of line-of-business executives have serious concerns about IT’s security capabilities and controls. Additionally, security professionals are in high demand and hard to find.

ESG has identified three best practices among trusted data center leaders, and Dell Technologies solutions and PowerEdge servers help organizations achieve and support those best practices in an ‘always-on’ landscape.

1. Prioritize market-leading BIOS/firmware security.

Data flows in and out of servers faster than ever before, and it is crucial for organizations to protect this data.
That’s why organizations need to ensure BIOS and firmware are up to date. Organizations that prioritize BIOS/firmware security are twice as likely to say that their security technology delivers higher-than-expected ROI. And it’s not just about BIOS improvements: all the other features and functionality help ensure that the technology continues to get better and more secure over time. Trusted data centers have increased functionality for security.

2. Refresh server infrastructure frequently.

ESG highlights the role hardware plays in the trusted data center and the benefits experienced by leaders who refresh their server infrastructure. For example, optimized infrastructure results in a 41 percent reduction in downtime costs in a modern server environment. Organizations with modern server environments (servers less than three years old) save as much as $14.3M per year in avoided downtime versus organizations with legacy servers.

That’s because old hardware can’t take on new threats. In the mid-market space, companies may not be aware of emerging threats, or may not think they’re big enough to be considered a target. The reality is they could be, which makes it even more important to ensure data center hardware is secure and up to date.

Unfortunately, IT hardware doesn’t get better with time; the older it gets, the less reliable it becomes. It costs more to monitor and maintain older servers – in head count, parts and the resources needed to get those servers back up and running – than to purchase optimized hardware on a refresh cycle. It makes sense to refresh more quickly to ensure you’re getting all the latest technology. With more advanced systems, if you do experience issues, you have more failover capabilities.

3. Automate server management.

Highly automated organizations are 30 percent more likely to deliver highly reliable application and system uptime, and they reduce data loss events by 71 percent.
Leaders are seeing tremendous value from automating their server management – they reported saving an average of 10.5 person-hours per week.

How are Dell EMC PowerEdge servers built to support trusted data centers? With so much at stake, security is one of the primary values that Dell builds into every single product we deliver. Our PowerEdge servers are engineered with security in mind, providing optimized infrastructure that lays the foundation for implementing these best practices.

Security is an evolving landscape, and so is server management; “secure today” does not guarantee secure tomorrow. Fortunately, PowerEdge servers provide security that is built in, not bolted on, and all models leverage the same management capabilities. Automation is essential, and Dell is continually expanding remediation and threat detection through our OpenManage application, including new capabilities around power management for reducing overall power consumption.

Dell Technologies infrastructure enables organizations to easily manage IT environments to solve their biggest challenges. To learn more about how Dell EMC PowerEdge servers are designed with the reliability, simplicity and security features needed to implement the above best practices, watch the ESG Trusted Data Center and Server Infrastructure webinar (registration required).
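To put the survey figures above in context, here is a back-of-the-envelope sketch. The annual downtime-hours input is illustrative (not from the ESG report); only the $30,000-$38,000 hourly cost range, the 41 percent reduction, and the 10.5 person-hours per week are from the text.

```python
def annual_downtime_cost(downtime_hours, cost_per_hour):
    """Annual cost of unplanned downtime at a given hourly cost."""
    return downtime_hours * cost_per_hour

# 20 hours/year of breach-related downtime is an illustrative assumption;
# $34,000/hour sits in the middle of ESG's $30,000-$38,000 survey range.
legacy_cost = annual_downtime_cost(downtime_hours=20, cost_per_hour=34_000)

# ESG's modern-server figure: roughly 41% lower downtime costs.
modern_cost = legacy_cost * (1 - 0.41)

# Automation savings: 10.5 person-hours per week, over a 52-week year.
automation_hours_saved = 10.5 * 52

print(f"legacy: ${legacy_cost:,.0f}, modern: ${modern_cost:,.0f}, "
      f"hours saved/yr: {automation_hours_saved:.0f}")
```

Swapping in your own downtime history and hourly cost makes the refresh-cycle trade-off concrete for your environment.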
Personalized recommendations have changed the way brands reach their customers. Taboola is the world’s largest discovery platform, delivering content recommendations to billions of consumers on many of the world’s top sites. We recently sat down with Ariel Pisetzky, Vice President of IT and Cyber, to learn how Taboola uses AI to drive its business. Taboola serves the right recommendation 30 billion times daily across four billion web pages, processing up to 150,000 requests per second.

A few years ago, Mr. Pisetzky and his team required a modernized infrastructure to support Taboola’s growth and improve the experience of their customers and advertisers. Delivering Taboola’s services requires extraordinary computing power and simplified management to attain the maximum performance needed to serve clients and users worldwide. The company turned to AI because it would allow them to respond dynamically to inquiries using inferencing and deep learning capabilities. Success depended on keeping insights flowing with adaptable AI systems, innovative architecture and intuitive systems management.

The engine driving their AI solution consists of two components. The front end handles AI inferencing on PowerEdge modular servers with Intel® Xeon® Scalable processors, processing and delivering real-time content recommendations. The back-end servers host cutting-edge deep learning models that are continually trained using sophisticated neural networks to infer user preferences.

By using PowerEdge modular servers, the IT team at Taboola can meet rapidly changing demands and enjoy the versatility and simplicity necessary to support a building-block approach. The team can cost-effectively use the same servers interchangeably as AI inferencing nodes, database servers or storage nodes with very simple configuration changes.
Each request coming into a front-end data center runs the AI-driven inferencing algorithms in a unique, ultra-fast process that delivers a relevant recommendation within 50 milliseconds.

Taboola took full advantage of the built-in performance acceleration of 2nd Gen Intel Xeon Scalable processors, together with the highly optimized Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN). Taboola initially enhanced its performance by a factor of 2.5x or more with its modernized infrastructure. Then, gaining the efficiencies of Kubernetes within the software layer – including the operating system, TCP/IP stack, load balancing and more – Mr. Pisetzky’s team went much further.

“With PowerEdge servers and Intel Xeon Scalable processors, we now get up to six times the performance on our AI-based inferencing compared to when we started,” states Pisetzky. “This helps reduce our costs, and we believe there’s a lot more to be gained over time.”

For the back-end data centers running deep-learning models to accurately and reliably train the Taboola models, the Dell EMC PowerEdge R740xd servers with their lightning-fast accelerators were the answer.

“Training is much different from the real-time inferencing we do on the front end. The demands aren’t in terms of response times, but rather the time it takes to process large volumes of data. PowerEdge R740xd servers provide the performance to access our massive data to train our models and push them back to our front-end data center for inferencing. We’re using Vertica, Cassandra and MySQL databases across a variety of nodes,” states Mr. Pisetzky.

Today, the company takes a more holistic view of its data centers as high-performance computing (HPC) clusters, able to process an enormous number of requests per second.
Rather than just adding servers or racks, Taboola looks at everything as a single HPC machine and reshuffles servers to achieve significant performance improvements and greater cost efficiencies.

The next step in building Taboola’s solution was determining the most efficient and cost-effective way to manage this large global footprint with a small IT team of 12 site reliability engineers across nine global data centers. The team turned to iDRAC, which allows them to deploy servers with the touch of a button. They can easily update servers across their data centers and ensure the BIOS and firmware settings are identical on every server.

The results Taboola has delivered to its users are impressive. Today, different people can visit the same page and receive personalized recommendations relevant to each of them, all without Taboola knowing who they are. AI has given Taboola the ability to take its business to the next level, providing personalized services, better user experiences and better results for end users, advertisers and publishers.

Learn more about Taboola’s AI deployment
Discover how iDRAC enables Taboola to manage their servers remotely
Watch the webinar: Making AI Real with Taboola and PRGX (registration required)
The future of 5G + Edge, you might say, is in the cards. Server accelerator cards – graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) – have an important role to play in reducing latency and improving bandwidth at the edge of the 5G network. With the volume of data generated by IoT and far-edge devices exploding, telcos must be prepared to accelerate the edge. Implementing server accelerator cards as part of a 5G new wave spectrum strategy will help telcos deliver more data, faster.

Making the Far Edge Faster

Let’s consider the use case of a disaggregated RAN at the far edge using an Open RAN (O-RAN) with a 7-2x functional split option (as defined in 3GPP). To serve the O-RAN distributed unit (O-DU) portion of the RAN, an accelerator card must satisfy three requirements:

A network interface card (NIC) with multiple ports of 10GbE, 25GbE, or even 100GbE;
Enhanced timing via a G.8275.1 telecom grandmaster (T-GM) clock featuring Synchronous Ethernet (SyncE), IEEE 1588 Precision Time Protocol (PTP), and physical-layer (PHY) timestamping;
Network processing acceleration using an FPGA, GPU, or ASIC.

Until a single card can satisfy all three requirements, multiple server plug-in cards can be used in the short term, with memory sharing or a pipeline between them. Each card could handle one or more of the above requirements, or the telco might choose to integrate some or all of these functions on the motherboard.

The diagram below illustrates how DU processing with a portion of the processing offloaded inline to accelerator cards might look. (Image courtesy of the O-RAN Alliance.)

For the central unit (CU) portion of the RAN, a multi-core smart NIC (sNIC) card should suffice for accelerating control plane operations such as ciphering and encryption. As you look at other RAN split configuration options, the requirements for acceleration will change, but they roughly follow the DU and CU needs described above.
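The combine-cards-until-covered logic can be sketched in a few lines. The card names and capability tags below are hypothetical, purely to illustrate checking whether a set of plug-in cards jointly covers the three O-DU requirement categories:

```python
from dataclasses import dataclass, field

# The three O-DU requirement categories described above.
REQUIREMENTS = {"nic", "timing", "acceleration"}

@dataclass
class AcceleratorCard:
    """A simplified model of a server plug-in card and the
    O-DU requirement categories it can satisfy."""
    name: str
    capabilities: set = field(default_factory=set)

def unmet_du_requirements(cards):
    """Return the O-DU requirements left unmet by this combination
    of cards (an empty set means the DU is fully served)."""
    provided = set()
    for card in cards:
        provided |= card.capabilities
    return REQUIREMENTS - provided

# Hypothetical cards: until one card meets all three requirements,
# several cards can be combined with a pipeline between them.
nic = AcceleratorCard("25GbE NIC", {"nic"})
tgm = AcceleratorCard("G.8275.1 T-GM timing card", {"timing"})
fpga = AcceleratorCard("FPGA offload card", {"acceleration"})

print(unmet_du_requirements([nic, tgm]))        # → {'acceleration'}
print(unmet_du_requirements([nic, tgm, fpga]))  # → set()
```

The same check would extend naturally to other split options by swapping in a different requirements set.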
When weighing accelerator card and server options, it’s best to consider the choice from a business point of view rather than a technology point of view. For example, which choice gives you the best total cost of ownership for workloads in a vRAN use case (e.g., microcells, picocells), and how many radio sectors will it cover? The answer could be a base station multiple-input multiple-output (BS-MIMO) configuration using 2 or 4 transmitter (Tx) and receiver (Rx) devices, or 64 Tx/Rx devices using millimeter-wave massive MIMO. The trade-off between software and hardware optimization flexibility needs to be reviewed carefully before deciding.

As further aggregation takes place along the data path from the edge toward the network core, we start seeing more common data center models for acceleration, with the need for storage optimization, caching, and so on. Many advanced NICs, like the ones used successfully in enterprise use cases, are applicable here.

Managing Infrastructure

Management and manageability of an advanced acceleration infrastructure are critical factors in the adoption of 5G edge solutions. Telcos expect to manage the complete hardware pipeline from the far edge (e.g., radio towers) to the network core data center using a single management interface for all control operations. An abstraction layer above the hardware is needed to take full advantage of disaggregation. For servers, this requires out-of-band management through DMTF’s Redfish for all server components, including sNICs and accelerators of every kind. Because accelerators are plugged into a server, they too must be manageable through a single, consistent interface.
To that end, Dell Technologies has designed its integrated Dell Remote Access Controller (iDRAC) to manage servers, including all accelerators and sNIC devices within them. DMTF standards, such as the Network Controller Sideband Interface (NC-SI) and Platform Level Data Model (PLDM) protocols, are used internally so that the iDRAC controller can access, configure, and manage these advanced components. The details of how that works are not visible to users; their interactions with the server happen through the Redfish interface.

For hardware management tooling, some telcos use the standalone Ironic project or Bifrost, or have hardware management integrated within the platform (e.g., OpenStack). For Kubernetes bare-metal deployments, Metal3 is expected to be used for hardware management, integrated via the Kubernetes Cluster API. Central to this approach is the ability to use the Redfish protocol for out-of-band management of servers, networking gear, and even dedicated storage.

Another aspect to consider is the operational usage of accelerators and sNICs. We expect that hardware management will prepare these accelerators so they’re ready for use in a server. As part of that process, telcos need to consider how users will program those accelerators to handle server workloads, particularly with a platform like Kubernetes or OpenStack. To prepare accelerators to handle application workloads, two criteria must be met:

A data pipeline must exist between the CPU and the accelerator card;
The application code must be executed on the card itself.

The Open Networking Foundation’s Stratum is commonly used for building accelerator or sNIC streams, and the oneAPI model can be used for application development.
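To make the Redfish-based out-of-band management described above concrete, here is a minimal sketch of walking a Redfish resource collection. The JSON payload is fabricated for illustration (real payloads vary by vendor and firmware), and the chassis path is an assumption; only the `@odata.id`/`Members` structure follows the Redfish resource model.

```python
import json

# A fabricated example of the JSON a Redfish service might return for a
# chassis NetworkAdapters collection (paths and member names invented).
SAMPLE_RESPONSE = """
{
  "@odata.id": "/redfish/v1/Chassis/System.Embedded.1/NetworkAdapters",
  "Members": [
    {"@odata.id": "/redfish/v1/Chassis/System.Embedded.1/NetworkAdapters/sNIC.1"},
    {"@odata.id": "/redfish/v1/Chassis/System.Embedded.1/NetworkAdapters/FPGA.1"}
  ],
  "Members@odata.count": 2
}
"""

def list_adapter_paths(collection_json):
    """Extract the resource path of each adapter in a Redfish collection.
    In a real client, each path would then be fetched over HTTPS to
    inspect model, firmware version, and port configuration."""
    collection = json.loads(collection_json)
    return [m["@odata.id"] for m in collection.get("Members", [])]

for path in list_adapter_paths(SAMPLE_RESPONSE):
    print(path)
```

Because every device, sNIC or FPGA alike, appears as a resource under the same tree, one traversal loop covers the whole accelerator inventory.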
Vendor-specific device programming and management tools can also be used for both device programming and stream management of accelerators and sNICs.

A follow-on consideration is the method used to achieve platform integration. In the Kubernetes environment, CustomResourceDefinition (CRD) APIs can be used to instruct an operator framework to prepare a smart device for its intended use. Similarly, for OpenStack, the device-specific driver in OpenStack’s Cyborg can be used.

As you can see, there is a lot to be considered when accelerating the edge for 5G services. Fortunately, Dell has taken all of this into consideration when building its edge servers and solutions. Think of it as a built-in “turbo” button that you can activate to give your network a competitive edge.
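As a sketch of the CRD-based integration path, the snippet below builds a hypothetical custom resource of the kind a device operator might watch. The API group, kind, and spec fields are invented for illustration; real accelerator operators define their own schemas.

```python
# Build a manifest asking an (assumed) operator to program an
# accelerator card with a given bitstream/firmware image. All names
# under accelerators.example.com are hypothetical.
def make_accelerator_claim(name, device_type, firmware_image):
    return {
        "apiVersion": "accelerators.example.com/v1alpha1",
        "kind": "AcceleratorClaim",
        "metadata": {"name": name},
        "spec": {
            "deviceType": device_type,        # e.g. "fpga", "snic"
            "firmwareImage": firmware_image,  # image the card should run
        },
    }

claim = make_accelerator_claim(
    name="du-offload",
    device_type="fpga",
    firmware_image="registry.example.com/du-l1:v1alpha1",
)
print(claim["kind"], claim["spec"]["deviceType"])
```

Applied to a cluster, such a resource lets the operator framework handle the two criteria above: establishing the data pipeline to the card and loading the code that executes on it.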