With the launch of Intel’s Third Generation Xeon Scalable platform based on 10nm Ice Lake processors, Intel has upgraded a part of the company that makes the BIG money. For the last few years Intel has been pivoting from a CPU-centric company to a Data-centric company, leveraging the fact that more and more of its product lines are built towards the datacenter mindset. With the launch of the new server platform in this past week, Intel is gearing itself up for an enterprise stack built on 10nm, PCIe 4.0, and cryptographic acceleration.

Alongside the new 10nm Ice Lake Xeons, Intel also has Optane persistent memory, 200 gigabit Ethernet, Stratix and Agilex FPGAs, high-performance SSDs with 3D NAND, Optane-based storage, AI hardware with Movidius and Habana, Tofino network switching, eASIC technology, and upcoming Xe graphics. This portfolio combined, according to Lisa Spelman, Intel’s CVP and GM of the Xeon and Memory Group, puts the company in a unique position of offering solutions that the competition can’t enable.

In this interview, we ask about Intel’s offerings, the scope of new accelerative features, what really matters to Intel’s customers, and how Intel is approaching its roadmap given the fast follow on from Ice Lake to Sapphire Rapids.


Lisa Spelman
Intel

Ian Cutress
AnandTech

Lisa Spelman is categorically what I call an ‘Intel Lifer’, having spent almost 20 years in the company. Her role has evolved very much into being one of the faces of Intel’s Data Center business and the Xeon product portfolio, discussing technical aspects of the product lines but also business and marketing strategy in detail. Her previous roles have included being an internal analyst, technical advisor to the CIO, the Director of Client Services, and Director of Datacenter Marketing.

 

IC: Ice Lake Xeon is finally here with its Sunny Cove core design. There are optimizations for the server industry, for AI, up to 40 cores, new security features, and higher memory performance. Historically Intel has taken its Xeon Scalable processors to every corner of the market - is this still true with the new Ice Lake Xeon platform?

LS: I think you’ve hit on some of those key features and benefits. What we’re doing here is we’re updating the platform, we’re updating the processor, and we’re updating a bunch of our portfolio all at once, which we think delivers a tremendous amount of customer value. To your question about having Intel in every corner of the market - we think it is a differentiator of ours. Where we are really focused starts with hitting edge workloads and delivering through the network, and driving further network transformation onto Xeon [from edge to cloud]. We’re delivering in the core of the datacenter, and we’re delivering in cloud and high performance computing. We’re continuing to seek to expand the services and capabilities that we can offer for customers, and just deliver platform consistency across so many of their use cases.

[Note that] I never call Xeon a server product! I worked so hard to get it changed into being called a true datacenter product and now it’s actually even extended out beyond that, to sitting in so many edge and ruggedized environments. So I take great pride and joy in seeing a Xeon on every base station and on every telephone pole!

IC: These extra markets that Intel plays in, are they a big opportunity for revenue growth? It’s been a big feature of what previous CEO Bob Swan and new CEO Pat Gelsinger have been saying recently.

LS: It’s a revenue opportunity and it’s a customer opportunity. It allows us to address more of their needs, and actually it allows us to play a more important role in our customer success. We’re not just in the datacenter, we’re spanning all of the ways in which our customers are seeking to drive their own value, their own monetization, and so that’s actually some of the funnest stuff to be part of.

 

IC: A lot of focus over the past generations has specifically been on the core performance on common everyday workloads for server customers. Intel is now seemingly focused on solution delivery, enabled through the other parts of the Scalable ecosystem such as Optane DCPMM 200 series, new 800-series Ethernet, Optane SSD and 3D NAND, Agilex FPGAs, eASICs, and the software stack behind DLBoost and OneAPI. Can you go into how customer demand in the ecosystem is shifting from simply a core purchase to a solution-optimized purchase that goes beyond the core?

LS: I still describe the Xeon as the most general purpose of all the general purpose solutions! It can literally run anything, and it’ll give you a good out of the box experience on everything. We have spent a tremendous amount of effort and resources in how we can improve in specific areas, and we do target those higher [market] growth areas. You mentioned artificial intelligence, which is an area where we’re investing both on the hardware side, which is super important, but the software is at least equal (if not more important) to tune for performance. [Enabling] that entire portfolio to deliver a solutions mindset has probably been one of our biggest changes, [especially] how to engage with our customers, and by looking at our [customers] and working with them on much more holistic requirements. So our requirements gathering has definitely improved and become much more comprehensive, and then we’ve built some solutions capabilities and solutions offerings on top of it.

We have talked about Market Ready Solutions, especially targeted to the Edge, and we talked about Select Solutions for cloud and enterprise and network and HPC use cases. We actually have similar types of programs that are less focused on the branding for our top cloud service provider customers as well, creating cloud engineering efforts that are focused on pairing those custom CPUs with the Optane performance, or the Ethernet 800 Series performance, and really helping them drive more value out of their infrastructure purchases.

An area where we’ve been utilizing the FPGA portion of our portfolio is in the move into smart NICs as well, which is a growing area of interest and gives us an opportunity to really holistically address our customer infrastructure management, as well as the monetization they want to do on top of the core.

 

IC: How important or relevant are raw core performance comparisons in the server industry?

LS: I still think there is a core of the audience and the industry that wants to hear that, and wants to see what it looks like. I don’t want us to walk away from utilization of those benchmarks. I’m also trying to drive the team [towards] the market that is pairing the benchmark with the actual real-world result when you use either just a CPU Xeon product or the entirety of the Xeon platform. We want to be able to [voice] that translation because customers are in different spots in their journey around a solution-style view. I think it’s important for Intel to continue to drive performance up, and we will continue to report on and discuss standardized benchmarks, but we’re adding into it a lot more of a holistic view of what the portfolio can do. So I think it’s important to meet our customers and our audiences where they’re at, instead of just dictating where we’re at.

 

IC: Intel’s server products have been using AVX-512 since the days of Xeon Phi, and then it was introduced on Xeon processors from Skylake. We are now also seeing AVX-512 roll out on the consumer desktop processors for the first time. There’s still some confusion as to how pervasive AVX-512 is, especially when it comes to Enterprise and Cloud Service Provider (CSP) code bases. What pick-up trends are you seeing with the feature, and how critical will it be as Intel progresses forward with platforms like Ice Lake?

LS: AVX-512 is such a great feature. It has tremendous value for our customers that use it, and sometimes I marvel when I read your audience’s comments about the raucous debate they will have about the value of AVX-512. But I tell you, and no joke, that a week and a half or so before launch I was looking at a list of the deal wins in one of our geographies. For 70% of those deal wins, the reason listed by our salesforce was AVX-512. Optimization is real.

What happens though, as with AVX-512 so far and with SGX in the future, is that when you launch and announce something new, despite our 10,000+ software engineers and our coverage of the ecosystem, it can be hard to get all of the stack to take advantage of one of those hardware features. It’s really hard to get that completely enabled. It has taken us years [for AVX-512] and there is more work to be done.

So has every customer that could benefit from AVX-512 had access to that because their software stack is completely ready? No. But we have a tremendous amount [of more customers] this year that weren’t able to utilize AVX-512 in 2011, or 2012, or 2015, or 2017. Enablement just keeps growing, year on year.
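As an aside for readers following the AVX-512 adoption discussion: software typically gates its optimized code paths on a runtime check of CPUID leaf 7, where the AVX-512 Foundation (AVX512F) flag is bit 16 of the EBX register. A minimal sketch of that bit test in Python - the sample register values below are made up purely for illustration:

```python
# AVX-512 Foundation is reported in CPUID.(EAX=7, ECX=0):EBX, bit 16.
AVX512F_BIT = 1 << 16

def has_avx512f(ebx: int) -> bool:
    """Return True if the EBX value from CPUID leaf 7 reports AVX512F."""
    return bool(ebx & AVX512F_BIT)

# Hypothetical EBX values, for illustration only:
print(has_avx512f(0x00010000))  # bit 16 set   -> True
print(has_avx512f(0x00000020))  # bit 16 clear -> False
```

In practice a real dispatcher would read EBX via the CPUID instruction (e.g. through an intrinsic or a library) and fall back to AVX2 or scalar code when the bit is clear, which is why enablement across a whole software stack takes time.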

If we link it to Software Guard Extensions (SGX), we now have the larger size enclave available on Xeon Scalable. We’ve had SGX in the market on our Xeon-E product line for a few generations, and that has allowed a tremendous amount of industry preparation and adoption. It also allows you to see who is really deeply interested and committed in building that confidential computing foundation. So now we are [moving the feature] into the larger socket, with a larger footprint, a larger enclave size, and people can start to actually scale their deployments pretty quickly because they’re ready.

So you look at cloud service providers that already have confidential computing instances, like Microsoft, IBM, Alibaba, Baidu, and then you look at these enterprises, like Royal Bank of Canada, PayPal, VISA, UCSF, German Healthcare - these companies are well underway to deployment because they have had the time and now they can move it into even greater scale.

It all leads back to how I’m working with our team on overall roadmap management. We have seen, and we are driving, a shift over these past few years to focus more concretely on meaningful feature advancement, [rather than simply] a laundry list of features. We are much more focused and prescriptive about looking at our roadmaps. If we add a feature, we have to consider what it would take to have that feature utilized, and how it would benefit a customer, rather than simply what it takes to put it in the hardware. That’s a shift in an Intel mindset and Intel thinking, but I think it will benefit not only ourselves, but our customers as well. They will see more stuff [over time], and it will be more ready in the market when it arrives.

 

IC: SGX, Software Guard Extensions, is something you have had in Xeon-E for a while. Now that the feature is moving up into the wider Xeon stack, Intel has segmentation with enclave size. Some SKUs have 8 GB maximum enclaves, some 64 GB, and some 512 GB. Can you speak to the reasons why this segmentation matters to Intel, and can you also speak about other security elements that the platform is enabling?

LS: On the enclave size segmentation, this is about giving customers choice and opportunity. They have the opportunity to think through what type of deployment they are going to need, what enclave size [they will need], and what type of data they are trying to manage. [It is about] what type of applications they are trying to more robustly secure, and this segmentation gives them that optionality.

[It is worth noting that] it is the first time we are bringing that choice into the market, and so we’ll learn. We will get feedback from our customers, and from those we have worked with in our ecosystem of cloud and OEM partners as we built out what the SKU stack looks like. But now [with the new feature] we will get all of that real-world customer deployment information, and then we will make adjustments as we go forward. You’ve been through enough of these launches with us, and have seen how we launch new features - we start with an effort and then refine it over time as we see and address the market adoption. So we think this will give customers a great foundation to start with, and we will see even more customers transition their existing PoCs (proofs of concept) from Xeon E to Xeon Scalable deployments. We will also see a new wave of proofs of concept from customers that were waiting for a little bit more of that true market readiness with a two socket system. So I think that it will give us an opportunity to learn and grow in the market together [with our customers]. So that is one of our biggest security advancements - adding that hardware-based addition to our customers’ software security portfolio.

We are also adding further crypto acceleration, and this is an area that Intel has been invested in for several years to a decade or more. We do see it as a true differentiator for us. We have a brilliant team that focuses in this space, and works on both the hardware and the cryptographic algorithms, as well as software support for the hardware. It’s also an area of really tight collaboration with our customers. As you heard me say at the [Ice Lake Xeon-SP] launch, the race is on to get as much data as possible encrypted, and keep it under greater security. I think that this is going to continue to build upon very important feature sets, and enable greater capability that our customers benefit from already.

 

IC: Speaking about the customers, historically when Intel comes out with a Xeon platform launch, it scales all the way from one socket to eight sockets. We’ve now got a situation where with 3rd Generation Xeon Scalable, we have a split, where you’ve got Ice Lake for up to two sockets on 10 nm, and Cooper Lake on four to eight sockets on 14 nm. Can you explain the reasons for splitting the markets? Are there technical limitations, or is it customer demand, or is there something else in the mix? Is this indicative of what we might see in the future, or will Intel be coalescing back again?

LS: I think that we are building even greater flexibility into our portfolio. You will see a lot of optionality that we create with some of the stuff CEO Pat Gelsinger talked about at Intel Unleashed a few weeks ago. Pat talked about [being able to] meet customer needs that might not be quite so specific. [In discussions with our customers,] we had an opportunity to refresh and bring forth some new capabilities in that four socket and above space ahead of Ice Lake. We made the choice to take advantage of that opportunity, add some new capabilities (like BFLOAT16), and meet some very specific high volume customer requirements that we had in that space ahead of getting Ice Lake out into the market.

Given the size of the traditional four socket enterprise market, and the specific customers, doubling our investment in Ice Lake or 3rd Gen Xeon Scalable to enable another four socket platform so quickly seemed like too much to manage for the value that platform would offer. So we delivered the right capabilities with the first portion of the 3rd Gen portfolio (Cooper Lake) for a refresh to meet a couple of really big customer requirements. [This enables us] to have the ecosystem focus on their Ice Lake two socket systems first, before moving on to our next generation of Sapphire Rapids, which will cover the whole of the stack.

So [Cooper Lake] realized an opportunity we had, driven by customer demand. It was not an eternal strategy to separate the [dual socket and quad socket markets], but as we increase our ability and pace to bring new technology to market, we may still look at how often the traditional four socket market needs that level of refresh.

 

IC: Speaking about those select customers, in the past at the Data Centric Innovation Day, I remember a slide which had a pie chart indicating all the Skylake Xeons that Intel makes. More than half of them in that graph were customized to specific customer needs, especially for those big high profile customers, the cloud service providers and the hyperscalers. Would you say that you’re still 50%+ customizing for those sorts of big customers? Can you talk about how that has evolved?

LS: It’s still more than 50%, and I would say that you will continue to see that grow. What we’re also trying to do is add a level of customization for all those markets.

Across the main roadmap of SKUs you will see the SGX enclave size opportunity, but we also have SKUs that are Cloud optimized for VM utilization, we have Networking and NFV optimized SKUs, the media processing optimized opportunities, and some long life and more ruggedized use case processors for edge workloads. So we’re trying to provide a level of customization even within the main SKU stack for those that have that kind of precision targeting of their infrastructure for their workload.

We do still deliver 50%+ customized parts, and our customized volume will continue to grow. Over time we have grown our capabilities in this space, and I have a separate team for this that we’ve invested in that manages our custom SKUs with our major customers. This allows them to really tune and target for their environments. It started out as a couple of knobs to tune, and now it has grown and we have the capability and ability and willingness to do full-on customization up to the IP level.

What we do with these SKUs, as we create, manufacture, and deploy them, is actually leave it up to our customers how much of the detail they want to expose [to the press or customers]. That co-optimization is often a significant part of how they differentiate themselves against competitors, so we consider it their call how much they want to share with the world. For Intel, we focus on the main performance benchmarks, and everything we do is on the main line available to everyone.

IC: So it’s interesting that you bring up the new separate SKUs. Beyond the network-focused ones, which you had in the previous generation, you now have a media focused SKU. What’s that all about? What’s that got in it? What is special?

LS: If you think about the workload, we’re trying to target the right core count, the right frequency, and then the mix between single-core versus all-core turbos. It’s also about how much cache is in there, and what the thermals look like for those customers.

In this case, across the media processing and optimized workloads, we have a team that is focused on this workload and these key customers, and they gather requirements from the CSP operators, online gaming operators, and all of those types of customers. [We want to enable] the most ideal configuration for guaranteeing quality of service, and allowing for as much performance to be delivered before potentially moving to offload accelerators. So those are the hardware configuration knobs we might turn to create that SKU.

The second thing we’ll do is focus our software work for them. So as we do encoding support and software support, we try to make sure they’re updated to take advantage of that specific SKU and that configuration. You will obviously get benefits across any SKU you choose, but we’ll do the greater tuning and optimization for code targeted at this processor. So on a standard benchmark you might get X performance, but if you take the media SKU version against that, you will likely get X plus a performance boost, which could be 10%, depending on the exact case.

IC: What’s this liquid-cooled processor in the product stack all about? Is that for HPC?

LS: You never know who is out there that’s going to experiment as well! But yes, definitely focused that one towards HPC. I still haven’t personally had the opportunity to dip my hand in a tank of something and pull out a Xeon, but I’m looking forward to it!

IC: Sorry, when I say liquid cooled, I thought it meant some sort of closed-loop liquid cooler. You’re talking about immersion cooling?

LS: We have customers that are looking to do all of the above! So we have a customer in every corner of the earth that wants to push the boundaries on all of it. The place where we see the most traction and interest on this is in the high performance computing community. As their capabilities have continued to progress and grow, we have some of our OEMs that have built up a lot of specialization in this space, and use this CPU for their differentiation. We want to support that.

 

IC: As we see the Xeon mainstream server platform moving forward, at the same time Intel is also talking about its enterprise graphics solutions. So is there anything special in Ice Lake here that gets enabled when the two are paired together?

LS: We are definitely working on that. I don’t have a ‘3rd Gen Xeon Scalable Better Together’ slogan or anything right now, but as we look out towards Sapphire Rapids and beyond, it is definitely something that we’re working together on. We absolutely want that portfolio value to show up when you put an Intel Xeon with an Intel GPU. oneAPI, as you know, has a software foundation for it, and then between my team and Jeff McVeigh’s team, and in Raja’s organization, we will work to make sure that we are delivering on that value promise. It’s too big of an opportunity to miss! But I have nothing to reveal for you today!

 

IC: Intel has historically enabled three silicon sizes for its Xeon products: a low core count for entry, a mid-core count for the bulk of the market, and a high core count for compute or cache optimized solutions. Can you describe how Intel is segmenting the Ice Lake silicon this time around, and mention any special segments that customers should be aware of?

LS: You know it’s fairly similar in the sense that we obviously have our top highest core count offerings, up to the 40 core, and we have the higher thermal configurations that it can hit. We are driving towards that peak performance, and trying to really drive down latency while improving performance. We also have options that go down to 105 watts. We cover the whole range, like I said, and as you try to hit some of these Edge workloads, you want to offer a whole range of parts. We go down to 8 cores, up to 40 cores, and then like you said, a range of cache sizes in between.

I think what we’ll see is we will have customers that will remain on their 2nd Gen platforms as well, because they are functioning well, and they might not be as high-performance a purchaser. They may value the stability and continuity of staying with the platform instead of moving or refreshing, and we intend to support both of those types of customers in the market. I really think that the way the market has segmented and shifted, the day of that wholesale entire bulk pickup and transition is not happening anymore. But those that ramp the fastest are continuing to ramp really fast, and those customers are a very significant portion of the volume.

 

IC: I’ve done some rough die-size calculations of the biggest Ice Lake chips being offered, making them approximately 620-660 mm2 - it’s a super large bit of silicon for Intel’s 10nm manufacturing processes. I know you announced 100k+ CPUs had been shipped before launch, and 200k+ as we went through the launch. But with such a large die, how do you see the roll-out of Ice Lake Xeon progressing to OEM partners and specialized customers?

LS: I promise I’m not going to be doing a weekly blog post of the next number! So we actually view this one as going very similar to our other platform transition ramps, as far as rate and pace. We know we have pent-up demand to meet on performance. We know that we have customers that are looking forward to productizing an Intel PCIe Gen 4.0 offering, and we know we have the platform value, with more memory channels and such. All of that leads to that compelling performance that customers want to go out and get.

We’re planning for this to be pretty in line with our traditional ramp, and we have staged our supply to do so. The fact that an Ice Lake Xeon die is a big piece of silicon, or the biggest one that Intel produces, is not something that’s new to Xeon. Big die sizes are my life! I am very nice to my factory friends, as I ask them for their best yields and highest volume output. What we’re facing right now is worldwide constraints on chip manufacturing and chip components, so we are trying to use our strategic supply chain capability to address that, and we intend to support the whole of the Ice Lake offering as we ramp. I think we’re actually in a really good position to capitalize on the fact that we are an IDM, and have that IDM advantage.
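As a back-of-the-envelope aside on those die-size estimates: the standard dies-per-wafer approximation gives a feel for how few candidate dies a roughly 620-660 mm2 design yields from a 300 mm wafer. This sketch ignores scribe lines and defect yield, so treat the numbers as an illustrative upper bound, not Intel’s actual output:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common approximation: pi*r^2/A minus an edge-loss term pi*d/sqrt(2*A).
    Ignores scribe lines and defect yield, so this is an upper-bound sketch."""
    r = wafer_diameter_mm / 2.0
    gross = math.pi * r * r / die_area_mm2          # whole wafer area / die area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

for area in (620, 640, 660):
    print(area, dies_per_wafer(area))  # roughly 80-90 candidate dies per wafer
```

Even before yield losses, a die this size leaves well under a hundred candidates per 300 mm wafer, which is why supply staging and yield matter so much for the ramp discussed above.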

 

IC: Intel is already making musings about next generation products, especially with the deployment of processors like Sapphire Rapids in the exascale Aurora supercomputer, which is happening at the end of this year or beginning of next. I know you probably won’t talk about Sapphire Rapids, but perhaps you can explain how long-lived the Ice Lake platform will be for customers looking to deploy systems today?

LS: I actually see Ice Lake as having an opportunity to live for quite a long time into that Sapphire Rapids generation. I talked about it a little bit earlier, that [it’s not solely about] wholesale moves.

If I think back to starting on the cloud service provider journey with these major customers, 10 years ago, at that time they wanted the lowest performance 1U servers. If anything happened, they threw it out, and then when we launched the new one, they threw them all out anyway and put all the new ones in. It has changed so much - cloud service providers have moved into true Enterprise-class requirements. They are our highest performance purchasers, they move towards the top of the stack, and they keep products in production longer. [This is true] especially as an ‘Infrastructure as a Service’ provider, where if they have a happy customer, why would they disrupt the happy customer? So they have that need to have a bit of a longer life on the platforms we provide.

At the same time, whenever there’s something new that they can use to differentiate [their offering] and create new services, or create value for their customers, they’ll pivot and move towards that. I think what will happen is that a lot of our Ice Lake customers will just continue to offer it now as premium services and fill their stack, then apply Sapphire Rapids on top of it. A lot has been made about how close together they are, but this actually is kind of the representation of what it looks like as we move closer towards that four and five quarter cadence that our customers want. So there’s a lot of market chatter as to whether they will be too close together, but we really see the Sapphire Rapids timing, similar to the Ice Lake timing, as having that opportunity for customers to ramp one and then start applying the other one on top. So we’ll have to work through that with the industry, but we’ve got a good team focused on it.

 

IC: Ice Lake has launched a lot later than Intel originally planned, there’s no denying that, at a time when there is increased competition in the market from both other x86 designs as well as new Arm designs. While the late launch isn’t ideal, can you describe areas of the supporting portfolio that Intel has been able to accelerate to aggressively support the Ice Lake launch to increase its competitiveness?

LS: For a Xeon launch, the important thing is not that you hit a specific date or a marketing moment, it is that you have your ecosystem ready to go. That is what we’ve really been focused on, rather than arbitrary date management. Then again, we do intend to capitalize on and continue to build value around the remainder of that portfolio that you mentioned, whether that’s the Optane SSDs, the Optane Persistent Memory, or the FPGAs.

I quite enjoy being with my peers and discussing the deals that are actually being won with Intel, and at times retained with customers, based on platform components. I mentioned ADQ for the Ethernet 800 series - we have customers for whom the Xeon plus ADQ capability and feature is so important that even if they were interested in or evaluating a competitive CPU silicon architecture, they’re sticking and staying with Xeon, because that ADQ plus Xeon solution makes such a big difference in their total cost of ownership or their application outcome. We see the same thing with Optane Persistent Memory, and I alluded to some of the work we’re doing with our FPGA-based SmartNIC wins. It is such an advantage to have that portfolio, and it’s interesting to see others in the market that are pursuing the same. I’m grateful for our multi-year head start in that space, especially as we enter such an incredibly dynamic time in technology and in silicon development. There really is a lot going on right now.

 

IC: Intel is pushing Optane DCPMM 200-series with Ice Lake Xeon as a competitive value addition, however it’s hard to miss that the agreement with Micron has ended, Micron is looking to sell the Lehi fab, and Intel’s only commitment to date has been R&D in the Dalian plant in China. Can you speak to Intel’s vision of the future of Optane, especially as it comes to product longevity on Ice Lake and Intel’s future?

LS: I think we are on the cusp of so much really great opportunity and continued customer momentum on Optane, and I’ve talked to you before about our proof of concept conversion into revenue. We’re at 80%, and that’s just tremendous for a new-ish technology out in the market, and so I’m really looking forward to getting the 200 series out into customers’ hands. We’ve got a great pipeline [for Optane], and we’ve got a lot of focus on building that ecosystem, just like we had to do on AVX-512, just like we’ve had to do on SGX, and just like we’re doing here.

I know that the Micron stuff was news in the ecosystem, but I have confidence in Intel’s ability to navigate and manage through manufacturing challenges or changes in agreements like that. I guess I’ll just say that particular part of it is not something that I think our customers need to worry about. I think we have a lot of optionality in how to produce and manufacture and deliver the Optane product. So I’m spending 100% of my Optane time building customer momentum, making sure customers are achieving and realizing value from it, rather than working through the challenges on the manufacturing side. I think we’re going to end up in an OK position there.

 

IC: Intel CEO Pat Gelsinger, at the recent Intel Unleashed event, spoke about Intel moving to a tiling concept with some of its product lines. It’s clear that the future is going to be with chiplets, tiles, or just bunches of silicon using advanced 3D packaging, especially for markets that deal with high-performance complex silicon products. Can you dive down into how this is going to affect Intel Xeon, the team, the research, and the future?

LS: It is - it’s good, it’s exciting, and it is aligned with our future. Before Pat had finished his first week [at Intel], I already had an opportunity to spend about two and a half hours with him just going over what we’re doing on Xeon - a few things have changed since he was last here! But I had a chance to sit down with him and walk through our long term strategy - where we are at today, what we are working through today, what we are setting ourselves up for in the future, and how we are addressing these customer needs. It was great to get his initial feedback and his reactions. He’s got a customer view that is very valuable to us, coming not only from his silicon background experience, but also from his last 11 years of really dialing up his software knowledge, which I think is going to help us so much. The good news walking out of that meeting was his view that we are on the right track, and we’ve got the right strategy, so now it’s just a desire to push us faster, harder, sooner, better, stronger - all of that good stuff. So I think that Pat’s view of our vision, and this Xeon team that we’ve built over the past couple of years, is really closely coupled.

So then to your point, we are going to take advantage of all of those unique and differentiated packaging capabilities in order to deliver in Sapphire Rapids, and deliver in Granite Rapids, and beyond that in some really exciting stuff that we’re working on that allows us to use even more of the cores, even more of the portfolio, together. So it’s definitely built into the plan.

The other thing I’ll say is that over the last year and a half or so, we’ve restructured how we’re building Xeon. It was necessary as we faced some challenges around our 10nm process that are well discussed in the industry. I had an opportunity to move out of a more traditional marketing role into having control of product management, IP planning, and overall roadmap structuring to build this cross-company team - the Xeon Leadership team.

I’m really excited about this group of leaders, and the people that we have pulled together, who I think represent best in class for the industry. The work that we’ve done in the last 18 months to reset the foundation of where we’re going will, I think, deliver a lot of customer value. It’s going to give us an opportunity to take advantage of EMIB, Foveros, or others on the horizon faster. It’s going to make us more competitive across all types of silicon - it’s not just x86 competition or any single market - and I look at the way we’ve capitalized on a diversity of thought and a diversity of talent.

I’m excited to have leaders like Gloria Leong, VP GM Xeon Performance Group, running all of our silicon design. We have Karin Eibschitz Segal, the VP Design Engineering Group, GM Intel Validation, who’s running our validation. We’ve got Rose Schooler, CVP Data Center Sales and Marketing Group, running all of our sales, and I’ve got Jennifer Huffstatler, GM Data Center Product Management, running strategy for us. Rebecca Weekly, VP and GM of our Hyperscale Strategy and a Senior Principal Engineer of DPG running our cloud for our major cloud customers. We’ve got Niveditha Sundaram, VP Cloud Engineering running our cloud engineering, and Nevine Nassif, Intel Fellow, Sapphire Rapids CPU Lead Architect, who is running our Sapphire Rapids programme. So you might notice a theme with those! I also have wonderful awesome male colleagues that are driving this as well - we’ve got Sailesh Kottapalli, Intel Fellow and Ronak Singhal, Intel Fellow.

But it’s this core group of women, in leadership and technical badass roles, that I’m really excited about. They are arm in arm with me and with our partners to crush it and bring so much performance and capability and commitment back to Xeon. I’m excited! So I say to the engineers of the world: if you’re looking for a great team that values diversity of thought, ideas, and experience, and wants to capitalize on everyone’s unique strengths, then Xeon is the place to do amazing and cool work.

IC: That almost sounds like a job ad!

LS: Almost! I want all the people that have the best ideas that are really looking for a home to make a major impact - I want them to feel welcome here. Having that many amazing diverse technical leaders happens when you’re purposeful about it, and you put them in big roles, and you give them the opportunity and the platform to shine.

 

Many thanks to Lisa Spelman and her team for their time.
Many thanks also to Gavin Bonshor for transcription.

95 Comments

  • flgt - Friday, April 16, 2021 - link

    Favorite part of the article was SGX enclave segmentation being about “choice” and “opportunity” for the customer.
  • Spunjji - Friday, April 16, 2021 - link

    I don't know about you, but for me, the "opportunity" to be charged more for stuff that's already in the processor is one that I always value. 👍
  • flgt - Friday, April 16, 2021 - link

    Lol, the only opportunity here is for Intel to extract money from your wallet.
  • nandnandnand - Friday, April 16, 2021 - link

    Did they just top up the older Xiaomi story to bump this one down? Yes they did. Internet Archive, archive.today, and Google cache don't have it, but Bing's cache does:

    https://cc.bingj.com/cache.aspx?q=anandtech&d=...
    https://archive.is/Jx9K1

    It might be an automatic thing, but it's still kinda funny.
  • Ryan Smith - Friday, April 16, 2021 - link

    "Did they just top up the older Xiaomi story to bump this one down? Yes they did"

    Yep, I did. We had another piece scheduled to run today, but that was pushed back due to some delays. So I put the Xiaomi story back on top, as I'd rather have the lead piece be a hardware review than an interview.

    It's nothing that I try to be secretive about. We can shuffle the top stories arbitrarily, and often do so to keep things like reviews and major announcements at the top (ahead of things like live blogs and interviews), even if they're not the most chronologically recent.
  • nandnandnand - Friday, April 16, 2021 - link

    Thanks, I've wondered about the story ordering a couple of times.
  • croc - Saturday, April 17, 2021 - link

    You can't blame the interviewee for playing 'softball'.
  • SoLoFoNo - Saturday, April 17, 2021 - link

    Absolutely and exclusively an advertising campaign from INTEL, and again via smokescreen: INTEL has always sent this lady ahead when they wanted to distract from the incompetence or rip-off of their customers. Whether it's because of her smile or whatever (...), I can't say. But if she had the slightest bit of self-respect, she wouldn't go along with it.... But anyway: she can - obviously it brings her a lot of money - BUT ANANDTECH SHOULD NOT SUPPORT THIS BY PUBLISHING SUCH CAMPAIGNS or SMOKESCREEN ACTIONS FROM ANYONE!!!
  • Oxford Guy - Monday, April 26, 2021 - link

    'But if she had the slightest bit of self-respect, she wouldn't go along with it..'

    Um... corporations aren't about that. They're about 'making' money. Advertising/marketing's entire purpose is to fool people into paying more for things than what they're worth.

    It's always amusing to see people post from the fantasy point of view that corporations and their employees are doing some sort of altruistic exercise. All of the social benefits of corporations are the scraps we get in return for them taking more than us.
  • Oxford Guy - Monday, April 26, 2021 - link

    They're basically a wealth redistribution scheme, a form of tricky regressive taxation. 'We' see it as a positive system because 'we' benefit from it (our 'benefactors' benefit mainly).
