IDC Analyst to VCE Technologist

Three months ago I joined VCE after working as a research director and industry analyst at IDC, leading IDC Australia’s research teams. For six years as an IDC analyst, I had the opportunity to peer inside leading tech vendors (including VCE), listen to their strategic direction and challenge the rationale behind their various go-to-market strategies. After several years of research it became clear that the IT industry is set for what I termed ‘multidimensional transformation,’ where change occurs beyond the technology sphere and into the business itself.

Each year at IDC I would conduct research into the C-suite, including the function of the CIO, and noticed that over the past few years the infrastructure stack has become more critical. Prior to the global financial crisis, improving or modernizing IT infrastructure wasn’t in CIOs’ top 10 priorities. However, as the financial crisis deepened, CIOs appeared to sweat their assets for longer. This was validated by corroborating research that showed prolonged PC and server lifecycles. The significance of this is that as infrastructure ages, it becomes less reliable and more expensive to run, so it made sense that infrastructure became a higher priority. However, as the financial crisis passed, the CIO’s focus on infrastructure continued to increase – and fast-forwarding to today, improving or modernizing IT infrastructure is now a CIO’s No. 1 priority. If the financial crisis didn’t explain the increase, what did?

One correlation is the influence of the line of business in IT decisions, which rose with the importance of IT infrastructure. It seems that the line of business was making ever more stringent demands of IT and the CIO, which in turn exerted more pressure on the infrastructure layer.

Wrapped around all these new demands from the line of business came the new watchword: velocity. The market demanded rapid application implementation and streamlined automation. 
Current infrastructure constrained velocity, so CIOs began to focus on the infrastructure layer to quickly provide new business solutions.

Looking at IT today, it’s fair to say that at a macro level, infrastructure has gone to hell in a handbasket. To grasp how this has occurred, it’s helpful to look to the past and see how we managed infrastructure in the mid-90s.

In the 90s, the cost of management (including staffing) was a percentage of what we spent on our server hardware. Fast-forward to today and the scenario has flipped. Management of the server fleet now costs a multiple of what we spend on acquiring it – management costs are rampant and spiraling out of control.

So what happened? We need to look beyond the physical installed base of servers towards the logical. It’s ironic that the technology that was meant to reduce costs and simplify infrastructure was actually one of the catalysts behind the crush we are now experiencing: server virtualization.

The impact of virtualization was that we started to buy fewer servers. This fundamental shift saw a tapering of overall server unit shipments, but it was offset by a rapidly growing number of logical servers. As we deployed more logical servers, the cost of management soared. The problem was that we continued to manage our logical servers the same way we managed physical servers; we didn’t change our IT operations to match the new capability.

Today we spend $8 on management for every $1 we spend on the server hardware itself [1]. What’s even more disturbing is that the data for the server market can be replicated for the storage and networking markets too. Something needs to change.

It should be little wonder, then, that the market for true converged infrastructure (CI) is booming, as CI solves many critical management issues that reference architectures and traditional approaches do not. 
While the general server market remains flat, IDC market research showed integrated infrastructure and platform sales increased 50% year over year [2]. And within this growing market segment, it’s VCE that leads (according to both Gartner and IDC), with Gartner’s latest report showing VCE in front with over 50% market share [3].

The strategy for most converged infrastructure vendors is to try and save their clients 10 or even 15 cents of the $1 they spend on acquiring hardware. VCE, on the other hand, targets the other side of the equation (where the meaningful savings are made) and aims to save clients $4 instead of 10 or 15 cents. In fact the saving is 68% according to an IDC study of VCE customers, which is actually $5.44 saved from the $8 spent [4].

The benefits to the business don’t start and stop with increased efficiency and decreased costs – two of the CFO’s favorite things. The lack of velocity is one of the leading reasons that lines of business bypass IT altogether. Research into VCE deployments by IDC has shown measurable reductions in the time to stand up infrastructure, from 160 days to 45 days. Additionally, research has shown a 79% reduction in the internal IT staff time to configure, test and deploy the infrastructure [4].

As an IT analyst, it was clear to me that converged infrastructure is the future and that VCE is leading the expanding market. But it is the way that VCE approaches the market that truly impressed me. VCE simultaneously solves critical technical and business challenges in such a different way from competitors that its value proposition is unique. It’s not often that a company’s strategy and offering intersect so perfectly with an expanding marketplace. 
Joining VCE and being part of the transformation wave that is sweeping the industry was enough to lure me away from the world of industry analysis.

[1] IDC, Virtualization and Multicore Innovations Disrupt the Worldwide Server Market, Doc #206035, March 2007
[2] IDC Worldwide Integrated Infrastructure & Platforms Tracker, October 2, 2013
[3] Gartner, Market Share Analysis: Data Center Hardware Integrated Systems, December 12, 2013
[4] IDC Whitepaper: Converging the Datacenter Infrastructure: Why, How, So What?, Doc #234553, May 2012

NX NAS appliances upgrade to 13th Generation hardware

We have some exciting news for those interested in NAS (Network Attached Storage), concerning the two products in our Microsoft Windows Storage Server 2012 R2 based PowerVault NX lineup. We are upgrading our NX NAS appliances to allow our Windows NAS customers to take advantage of the improved performance, energy efficiency and manageability options of our powerful new line of PowerEdge 13th Generation servers.

The NX3200 and NX3300 NAS appliances, currently based on PowerEdge 12G server technology, are now being upgraded to 13G hardware. They will inherit all the efficiency and performance features of the new server platforms, including the benefits of the new Haswell microarchitecture.

Best of all, this time around we went a step further than just changing nuts and bolts under the hood. We added a cool new feature called RASR – the Rapid Appliance Self Recovery tool. RASR allows the end user to restore the NAS appliance to its factory shipping state. It uses a bare-metal restore process, in which the operating system drives are rebuilt to the exact default factory image. This is especially useful in test environments where machines are re-imaged often, or if you are notorious for misplacing your system restore DVD.

How else does this benefit our customers?

More CPU cores: The NX3230 entry configuration and the NX3330 Optimum trim level will both move from four to six cores. For larger configurations, both the NX3230 and NX3330 will move from six to eight cores, providing additional CPU cycles for more demanding applications.

PERC: The Dell PowerEdge RAID Controllers shipping with the NX3230 and NX3330 will now support 12G SAS and the new generation of backplanes. 
In addition, the H730, which is part of our NX3230 default configuration, will double the cache from 512MB to 1GB to accelerate performance.

Chassis Flexibility: We have moved the NX3330 (gateway appliance) from a two-PCI-slot to a three-PCI-slot chassis, which gives our customers the freedom to add additional IO cards.

Memory: The new architecture for both the NX3230 and NX3330 allows for higher memory clock speeds. The two units ship with 1600MHz RAM configurations, compared to the 1333MHz of the previous generation. In addition, we added a 64GB RAM option for the NX3330 Performance configuration, answering customer demand for a higher-performing solution, especially in large home-share environments.

Keep in mind that a Windows Storage Server based NAS can be an extremely efficient, fast and feature-rich platform when it comes to SMB file sharing. Especially if you have Microsoft admin expertise in-house, there will be zero learning curve with Windows-based NAS products, and seamless integration with AD (Active Directory) and Microsoft-based systems management.

Finally, we have a new name for this portfolio of NX NAS appliances, which were previously known as the PowerVault series. Going forward, we will refer to them as the Dell Storage NX NAS series of products, as part of an update across our portfolio that is moving under a common “Dell Storage” naming. So, go online to check out the new Dell Storage platforms. To learn more and stay updated, follow @Dell_Storage on Twitter.

Beauty & The Beast

After watching a rerun of the EMC World opening session, I felt compelled to underscore the excitement we’re seeing from our customers regarding “The Beast,” aka XtremIO 4.0!

Of course, bigger clusters, bigger capacities and bigger IOPS numbers tend to get all the fanfare at a launch event but, perhaps surprisingly, these capabilities are not the sole reason customers select XtremIO for their transactional workloads.

Deep within “The Beast” is something of inherent beauty – an architecture that can start small and grow to over a petabyte. An architecture that scales out linearly and delivers consistent, predictable sub-millisecond latency. An architecture that enables data services to be inline, all of the time. And an architecture that enables incredible simplicity and ease of use.

None of this beauty was created just for “The Beast”. But it is because of this beauty that we were able to create “The Beast”.

But is this beauty only skin deep?

Let’s recount recent history. When we first announced XtremIO, just eighteen months ago, much of the fanfare in the flash segment was around upstarts such as Violin Memory and FusionIO. Neither company was promoting an “array” as the best use for flash in the enterprise, and their new model for storage promised the inevitable demise of all established storage vendors.

As we sit today, as recently confirmed by Gartner, EMC market share for all-flash arrays now exceeds EMC market share for general-purpose storage arrays. FusionIO is gone and Violin is on the ropes, ironically while trying to create an array. Sure, there are new pretenders – their pitch sounding eerily familiar to those of yesterday – but here at EMC we’re remaining incredibly focused on delivering against our roadmap and driving customer success.

And we’re not done with flash. Not by a long way. Later this year we’ll release DSSD to market. We believe DSSD will once again change the game for flash in the data center. 
But this time for next-generation in-memory database workloads and high-performance big data analytics. There’s much beauty in DSSD too, but that’s another story.

Will the public cloud kill agile development?

Contrary to popular belief, the public cloud will not necessarily make life easier for IT. In fact, technology professionals, particularly those in relatively new fields like DevOps, are at serious risk of becoming irrelevant if they can’t or won’t understand the affordances of cloud infrastructure.

Trevor Pott nailed it in his recent article about the rise of DevOps and SecOps when he said “developers become more paranoid…with operations out of the way and infrastructure provisionable through APIs there is no one to blame for delays but themselves.” The issue is that DevOps teams are made up primarily of developers who’ve learnt to manage operations along the way. And Pott (understandably) doesn’t reach the point that in the case of agile development, the medium really is the message, or at least inexorably intertwined with it.

Without at least an appreciation for the technology infrastructure that supports agile – or worse, by rigidly defining it for one explicit purpose or another – DevOps will not be able to provide the iterative, responsive, continuous delivery that is its raison d’être. In other words, it will fail. But this infrastructure must also be simple and malleable enough that it doesn’t become a time-sink for the former developers who dominate the school of DevOps.

A question concerning (cloud) technology

Ostensibly, the public cloud is the most malleable of technology infrastructures, an acknowledgement of how “without their code, few organisations will be competitive,” as Pott puts it. But is it? Public clouds are not always the most cost-efficient or the easiest to maintain and scale. Nor are they, especially in the case of SaaS, open to customisation and variation of their workloads. This is not a bad thing in itself. 
But it poses some particularly thorny issues for DevOps.

The main issue is that DevOps exists as what one of my friends calls a response to the high modernism of technology – the notion that software ought to be developed upon planning principles so fine and rigid as to obviate the very role of the developer. In his essay The Question Concerning Technology, Heidegger makes a similar point with his “standing-reserve”, the ideology that defines any technology as built for, and only ever completing, a single and immutable purpose. The alternative – and the motivation for DevOps – is to embrace potential rather than stricture, whereby any particular object is open to interpretation and alteration based on whatever circumstances call for. Heidegger calls this spirit of technology techne. DevOps calls it agile.

The public cloud, governed as it is by third-party forces, is increasingly an example of a standing-reserve. Anything “as a service” essentially sits waiting to be called on for one specific purpose, whether hosting particular workloads or providing particular applications. The affordances available to DevOps – to make constant minute changes to how their products and services function – are increasingly restricted, whether by cost or technical complexity or just standard access denial. In other words, the public cloud offers simplicity only at the sacrifice of control. And without control over the infrastructural medium, the DevOps watchwords of responsiveness and agility become practically irrelevant.

The techne-cal solution

Of course, DevOps itself exists to merge the agile mindset of “dev” with the functional control of “ops”. But, as Pott points out, operations has traditionally worked under an “us vs. them” mentality, restricting technology resources to only the most well-defined of purposes. Operations is the high priest of technology as standing-reserve, if you will. 
So it’s unlikely that DevOps will find much help there.

What DevOps really needs is a medium where agile development doesn’t generate frictions for coders that disrupt continuous delivery, but which also provides an infinite range of affordances for potential projects and services. A techne platform, in other words. Private cloud infrastructure is the obvious choice – but it typically goes too far the other way, creating even more frictions by dint of the technical complexity that results from its piecemeal or siloed construction. What if the private cloud came pre-assembled, with all systems integrated from the very beginning? This is the principle behind converged infrastructure.

With converged infrastructure, DevOps can fully understand the medium in which it’s working, since all component systems are already integrated and accounted for. Like a potter with clay, that immediate sense for the technological medium is important because it lets the craftsperson get on with the actual business of building something – whether a vase or an enterprise application – in the knowledge that the medium will respond in a more or less predictable way. Unlike the medium of the public cloud, converged infrastructure also allows full control over how its affordances get used, reused, and recycled.

The old boundaries between traditional packaged applications and mobile-first, web-based apps no longer apply: they can run securely on the same infrastructure without conflict or incompatibility. Once again, this allows DevOps to delve into rapid iteration, production, and destruction without questioning the baseline integrity of their infrastructure. 
And to top things off, the long-term costs of running enterprise applications on converged infrastructure are typically lower than in the public cloud – negating one of the biggest reasons for ceding infrastructural control in the first place.

For business managers, the question after all this is probably “so what?” The answer is that waterfall and other prescriptive, high-modernist ideologies about software are no longer functional – if they ever were. Now, speed and responsiveness are king: if you can cut time-to-market for a new service from 25 days to 5, you can beat the competition, at least for the next few months. But the curse, and magic, of continuous delivery is that it never stops improving. As Pott writes, the tribes within DevOps need to quickly find common ground to keep delivering those results for their businesses. A technological medium like converged infrastructure, which can give developers myriad affordances to iterate and test while smoothing out the frictions of operational control, will be a necessary bridge between them.

Image: “Waterfall and Rocks“, Mark Engelbrecht

The Importance of Robots, VR and IoT to Channel Partners in 2018

Last month we celebrated one year of the new and improved Dell EMC channel partner program. And what a year it was! We learned a lot during this time and I’m pleased to say we have listened to the feedback from our channel partners and customers and actioned it.

We’ve kicked off 2018 on a high by announcing improvements that will continue to increase the benefits for our valued partners. With a new rebate structure and a competitive MDF strategy, we have shown our intention to always reflect on our offering and continue to make the program simple, predictable and profitable for our partners. Locally, we hosted our first Partner Advisory Board of the year; it’s a great event where we drive truly meaningful conversations that allow us to continually invest in and improve the program. But these ongoing changes are just one part of the strategy that helps our partners remain successful.

At the end of 2017, Dell Technologies predicted 2018 to be the year that human-machine relationships reach new levels. So, what does this mean for our channel partners? Emerging technologies like artificial intelligence (AI), augmented reality (AR) and virtual reality (VR) will dominate the conversation. Just this week, an Australian school revealed it was using a robot to teach alongside a teacher. The AI capabilities of the robot provide a two-way experience that goes above and beyond a child using a mobile device. The potential for AI to disrupt all industries is here and we are about to jump in head first. It’s important to ensure that your business is not only aware of what can be achieved using the technology but also has the technical understanding of the infrastructure changes needed to create a modern data centre.

Advances in the Internet of Things (IoT) and cloud computing are progressing faster than we anticipated. 
This extra processing and analytical power is already changing the way we live, with more connected homes and cars, and greater consumer expectations in almost every industry.

One of my favourite customer stories of last year is about Tassel and our partner Intuit Technologies. Using IoT to farm more data on how their salmon pens were performing seemed like a straightforward solution. By predicting multiple variables, the team was able to produce better outcomes for the business. But to run the IoT, Tassel needed to upgrade its IT systems, which is where Dell EMC came into the mix. We provided the hyper-converged infrastructure required to store, manage and automate all the extra data the IoT element was producing, allowing for a real-time decision-making process. This journey had two parts to its success, and we encourage our partners to become experts in both.

As we continue to see these incredible use cases and explore new ways of working with technology, our partners need to remain ahead of the curve. Immerse yourself in the possibilities that can be achieved so, when the time comes, you can help to bring these incredible use cases to life.

With Dell Technologies World and our annual Global Partner Summit taking place next month in Las Vegas, we encourage all our partners, resellers and customers to join us. In an action-packed agenda, we’ll explore the latest technology trends with our experts, hold workshops and training on our full product portfolio, and share great stories from our customers. It promises to be an inspiring week with lots of insightful discussions. What are you waiting for? Find out more information and register for the event here.

Network Automation with Ansible

OS10 and automation solutions overview

This era of digital transformation aims at reducing operational costs for IT infrastructure, as a result of which converged IT processes are becoming increasingly important. DevOps is an operational model that helps businesses achieve agility and efficiency, and of late networks are also becoming part of this model. Network automation is a crucial component of the model, as networks are expected to act, react and perform reliably based on changing business needs.

OS10 is a next-generation Linux-based network operating system that provides a rich set of programmatic interfaces to configure and maintain network devices. This ability, together with integrations with tools like Ansible, makes OS10 a prime choice for DevOps environments. Ansible integrations provide the ability to treat network equipment as software components, reducing the complexity of automating configuration and maintaining the network.

Dell EMC Networking and Ansible automation

Dell EMC Networking integration with DevOps tools such as Ansible helps simplify network deployment, improve uptime, increase configuration consistency, add capacity more easily, and reduce overall OpEx. The most common use cases for network automation are rapid provisioning, configuration management and deploying configs at scale. The 1990s model of network provisioning through the CLI and some TCL scripts simply will not work with present web-scale networks. The figure below depicts how networks were configured before the advent of automation.

Network provisioning usually involves a fair mix of the following tasks: infrastructure setup (DHCP, AAA and SNMP servers), switch deployment (racking and powering up the switch) and switch configuration and validation. The network administrator is expected to build a configuration from scratch, or to copy and paste previous configurations and edit them by hand to create the new configuration. 
This new configuration is built in a staged environment and then installed/shipped to its permanent location. This process does not scale and is highly error-prone, which makes fabric-wide network validation a nightmare.

What is Ansible?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs. Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time. It uses no agents and no additional custom security infrastructure, so it’s easy to deploy – and most importantly, it uses a very simple language (YAML, in the form of Ansible playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

Ansible and Dell EMC integrations

Dell EMC network devices and networking software can be automated through Ansible. Dell EMC Networking provides Ansible modules and Ansible roles to deploy and maintain the OS10 and OPX offerings. The DellOS Ansible role library can be found on Ansible Galaxy; it facilitates feature-specific configuration on devices running OS10/OPX, including installing and upgrading software images on the network device.

OS10 modules for Ansible

dellos10_command: Run commands on remote devices running OS10
dellos10_config: Manage configuration sections on remote devices running OS10
dellos10_facts: Collect facts from remote devices running OS10

OS10 roles for Ansible

There are 26 Ansible roles available for OS10; a few of them are DellOS-BGP, DellOS-Image-Upgrade and DellOS-VLT.

Key benefits of Ansible integration with OS10

Deployment: Ansible integration reduces the deployment time and operational costs needed to deploy a data center or campus network. 
Idempotency: Ansible modules are idempotent; they bring a network device to the desired state without affecting its existing state.

Extensibility: Ansible can be integrated into many existing DevOps workflows, making the network a part of the wider IT environment.

Scale: Ansible integration with OS10 can help automate network devices at scale by using template-based solutions.

Agentless: Ansible does not require an agent on the switch, so it can be run against any Dell EMC networking device.

Summary

IT transformation calls for networks that are reliable and can be automated at scale. Ansible integration with Dell EMC Networking enables networking devices to be part of the DevOps operating model, making networks more agile and reliable. It’s time to modernize the way we build, design and deploy networks by taking advantage of DevOps tool integrations like Ansible with Dell EMC Networking.

For more information on Ansible integration with Dell EMC networking software, please contact [email protected] or send queries to [email protected]
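To make the workflow concrete, here is a minimal sketch of a playbook that exercises the three OS10 modules named above. The inventory group name (os10_switches), the VLAN number and the description are hypothetical placeholders, and connection details will vary with your Ansible version and environment; treat this as an illustration rather than a drop-in configuration.

```yaml
---
# Sketch playbook: provision a VLAN on Dell EMC OS10 switches.
# "os10_switches" is an assumed inventory group; VLAN 100 and its
# description are example values only.
- name: Provision a VLAN on OS10 switches
  hosts: os10_switches
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Collect device facts (model, OS version, interfaces)
      dellos10_facts:
        gather_subset: all

    - name: Push the VLAN configuration (idempotent)
      dellos10_config:
        parents: ['interface vlan 100']
        lines:
          - description web-tier

    - name: Verify the VLAN is present
      dellos10_command:
        commands:
          - show vlan id 100
      register: vlan_out
```

Run with something like `ansible-playbook -i inventory vlan.yml`. Because dellos10_config is idempotent, re-running the playbook against an already-configured switch should report no changes, which is what makes template-driven provisioning at scale practical.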

LuLaRoe to pay $4.75M to settle pyramid scheme lawsuit

SEATTLE (AP) — The California-based multi-level marketing business LuLaRoe is paying $4.75 million to settle allegations from the Washington state Attorney General’s Office that it’s a pyramid scheme. The company denied wrongdoing in a consent decree filed late Monday in King County Superior Court in Seattle. LuLaRoe sells leggings and other clothing to a network of independent retailers, who recruit other retailers to sell the company’s products. Attorney General Bob Ferguson sued the company and its executives two years ago, saying they deceived people about how profitable it was to be a LuLaRoe retailer. Ferguson said that $4 million of the settlement will be distributed to about 3,000 Washington residents who were recruited to the company.

Danish ex-minister on trial for splitting migrant couples

COPENHAGEN, Denmark (AP) — Denmark’s Parliament has voted to try a former immigration minister at the rarely used Court of Impeachment over a 2016 order aimed at separating asylum-seeking couples where one partner is under 18. In Tuesday’s vote, the 179-member Folketing overwhelmingly voted to try Inger Stoejberg, who served as integration minister in the previous government from June 2015 to 2019. The court will convene for the first time in 26 years. Stoejberg could face a fine or a maximum two years in prison. No date for a trial was announced. A parliament-appointed commission had said earlier that separating couples in asylum centers was “clearly illegal.”