
  • ShizNet - Monday, September 12, 2005 - link

    ...25 such systems will cost around $35000, which is on the low end of the salary scale for an IT worker. ... you to cut at least one IT position per 25 computers

    NICE advice!!! That must be the BRIGHTEST idea you've ever come up with.

    Who do you think reads your website?
  • Doormat - Monday, September 12, 2005 - link

    Cutting one IT worker is a LOT of money. A $35,000 salary doesn't tell the whole story once you start to consider health benefits, pensions, etc. Usually you have to double the salary to find out what an employee really costs the company - overhead for cubicle space, electricity, benefits, and so on, PLUS salary.
  • JarredWalton - Monday, September 12, 2005 - link

    As I said, that's the low (VERY low) end of the IT salary range. How much would Joe Computers charge to build and configure 25 systems? How long would it take? How long will Joe stay in business? A 3-year warranty and a 17" LCD plus the rest of the computer for $1400 (including XP Professional) is a very good price for a corporation. I'm positive the Wal-Marts of the world don't really care whether or not Intel and Dell make the fastest PCs.

    Let's say a business formerly had 10 IT workers supporting 150 users and PCs (not unheard of in the business), and they switch to Dell and eliminate six of those IT people. They may end up with $300,000+ a year in additional budget for computer costs. That would buy them brand new PCs every other year, or they could upgrade over time - just replace older PCs with a new model when necessary - and save $200K+ in IT costs yearly. That's a small amount for a large corporation, but everything adds up over time. (Quick math below.)
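    A back-of-envelope version of that math, in Python - the 2x loaded-cost multiplier is Doormat's rule of thumb from above, and the rest are the hypothetical numbers from this thread, not real accounting:

        SALARY = 35_000              # low end of the IT salary scale, per the article
        LOADED = 2 * SALARY          # rule of thumb: double salary for true employee cost
        CUT = 6                      # IT positions eliminated in the scenario above
        PCS, PC_PRICE = 150, 1_400   # fleet size and per-seat Dell price

        freed = CUT * LOADED              # budget freed per year
        refresh = PCS * PC_PRICE / 2      # full fleet replacement every other year
        print(f"freed: ${freed:,.0f}/yr, refresh: ${refresh:,.0f}/yr, "
              f"net: ${freed - refresh:,.0f}/yr")
        # -> freed: $420,000/yr, refresh: $105,000/yr, net: $315,000/yr

    The totals land in the same ballpark as the $300K+/$200K+ figures above.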
  • ShizNet - Tuesday, September 13, 2005 - link

    It's all good in theory - coulda, shoulda, musta been...
    But in the real world, some people are SO clueless they can't even create folders on their PC. And don't even get me started on the CEO level - those people need a PERSONAL tech 24/7. People who've been around the block know what I'm talking about.
    A PC is worth nothing in two years, and tech jobs don't get easier with every Service Pack and patch. Who do you think will service those dinosaurs in 5-6 years if you trade the techs away with your BRIGHT solution?

    Next time, find a better analogy.
  • TrogdorJW - Wednesday, September 14, 2005 - link

    Some ppl r so stoopid that they cant spell and complain about others that r smarter than they r....

    Four IT techs can support 150 people quite easily. I did that at college, where it was actually one supervisor and two techs supporting the entire HR department. For a university of 30,000+ students, the HR staff gets quite large. We had at least 150 PCs at the time, although we used Micron instead of Dell. We would typically order 10 to 20 new PCs a year; they would be installed at the locations that needed the additional processing power the most, and everything else would shift down. We'd then retire a similar number of PCs to the graveyard (i.e., the recycling center).

    The "bright solution" Jarred mentions is pretty much how most corporations run things. So, who has the better grasp of the way the market works: a person describing how corporations actually operate, or someone whining because they don't like reality? Try to broaden your perspective on the world a bit, ShizNet. Too much shiz in your head right now, I guess?
  • Questar - Monday, September 12, 2005 - link

    You may not like it, but it's true. The fewer people needed to support a system, the better.

    Really, what value does a tech provide to a business? Does he increase revenue? Reduce the cost of products or services? Increase shareholder value?
    No. A tech is nothing more than additional overhead.
  • JarredWalton - Monday, September 12, 2005 - link

    FYI, I work as a tech at a large corporation. There's a reason we can have four technicians supporting phones, the network, 150+ PCs, etc. We still have a lot of downtime, but there is job security in not having 20 IT workers at a location. Of course, HQ has a ton of computer people running most of the server stuff, but you still need a few onsite technicians.

    If I were offered a job as a computer tech supporting a company with only 10 to 20 PCs, I'd be concerned about what would happen long-term. Set up a place properly, and there's not much to do other than sit around waiting for something to go wrong. You either get people trying to expand your job functions (to "better utilize resources"), or else they start having you "train a backup" who functions as a regular employee.

    Anyway, I'm simply reporting how big business usually functions with regard to IT. Is it good, bad, right, wrong? That's not the point; this is - as far as I can see - how corporations view the PC market. They want it to work, and they want to spend as little as possible getting it to work.
  • IntelUser2000 - Monday, September 12, 2005 - link

    Jarred, are you sure about the Q2 2006 intro date for the later Montecitos?
  • JarredWalton - Monday, September 12, 2005 - link

    That's what shows up in the PDF I have. It could be June 30th for all I know, or April 1st. Delays are also possible, as there is some question of 667 FSB support with Itanium. They show stuff like "667 Enabled FSB", but in the past FSB speed ramps for Itanium have been slow in coming. I also don't see any mention of RAM type for the Montecito update. I'm guessing it's still DDR, but DDR-200 is listed under Q3/Q4'05 and nothing shows up under the later quarters. Heh... odd. Maybe we'll get FBD on Itanium as well, sooner rather than later? (Don't quote me on that!)
  • IntelUser2000 - Wednesday, September 14, 2005 - link

    Intel won't make faster chipsets until 2007, when Tukwila is out. They cancelled the original 667MHz FSB chipset for Itanium. I don't know why - I guess it was validation time, or something else - but cancelling that chipset was one of the stupidest things they could do, since it leaves Intel relying on OEM companies for performance. Intel will have to depend on companies like SGI, HP, and Hitachi for 667MHz FSB-enabled chipsets.
  • IntelUser2000 - Monday, September 12, 2005 - link

    For x86 code, Itanium supports either hardware emulation or software translation. The difference between emulation and translation may seem minimal, but translation has much better performance: while hardware emulation just emulates the instructions, the software translator dynamically optimizes the code on the fly to improve performance.

    Hardware emulation is NOT present on Montecito; it was dropped in favor of IA-32EL (software translation). A conceptual sketch of the difference follows.
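    A toy Python sketch of the distinction - nothing here reflects Intel's actual implementation, and the two-opcode "guest program" is made up. An emulator re-decodes every instruction on every run, while a translator pays the decode cost once, caches the result, and is free to optimize it:

        # Hypothetical guest code: a list of (opcode, operand) pairs.
        GUEST_PROGRAM = [("add", 3), ("mul", 2), ("add", 4)]

        def emulate(program, value):
            """Emulator style: decode and dispatch every instruction, every run."""
            for op, operand in program:
                if op == "add":
                    value += operand
                elif op == "mul":
                    value *= operand
            return value

        TRANSLATION_CACHE = {}

        def translate(program):
            """Translator style (the IA-32EL idea): convert the block once, reuse it."""
            key = tuple(program)
            if key not in TRANSLATION_CACHE:
                steps = []
                for op, operand in program:
                    if op == "add":
                        steps.append(lambda v, n=operand: v + n)
                    elif op == "mul":
                        steps.append(lambda v, n=operand: v * n)
                def translated(value):      # decode cost has already been paid
                    for step in steps:
                        value = step(value)
                    return value
                TRANSLATION_CACHE[key] = translated
            return TRANSLATION_CACHE[key]

        assert emulate(GUEST_PROGRAM, 1) == translate(GUEST_PROGRAM)(1) == 12

    A real translator goes much further - reordering and scheduling the code for the Itanium pipeline - which is where the performance edge over straight emulation comes from.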
  • IntelUser2000 - Monday, September 12, 2005 - link

    The MAJOR difference between Foxton and *OTHER* dynamic overclocking is that Foxton is implemented in HARDWARE, while other dynamic overclocking is implemented in SOFTWARE.

    I guess you guys may be thinking of MSI's D.O.T. dynamic overclocking or the one in ATI's Catalyst driver. But those are software based. 30 million of the LOGIC transistors are dedicated to JUST Foxton technology.

    Foxton isn't just dynamic overclocking. If power consumption exceeds the set threshold, it clocks the CPU down until it's at or under the threshold. Unlike conventional overclocking, Foxton FINDS the right point where it won't damage the CPU while providing the maximum clock speed the design allows.

    Overclocking a Prescott to 6GHz is not a safe point, BTW.

    Foxton responds extremely quickly to changes in demand and power consumption. The hardware behind it is an extensive power-management system, basing its decisions on power draw, temperature, and workload. (A rough sketch of the control idea follows.)
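    For illustration only, here's the control idea as a small Python loop - the wattage numbers, step size, and quadratic power model are all invented, and the real controller is an on-die unit, not software:

        POWER_BUDGET_W = 104.0                 # made-up power threshold
        F_MIN, F_MAX, STEP = 1.6, 2.0, 0.02    # made-up frequency range/step, GHz

        def foxton_step(freq, measured_watts):
            """One sample: back off when over budget, clock up when there's headroom."""
            if measured_watts > POWER_BUDGET_W:
                return max(F_MIN, freq - STEP)
            return min(F_MAX, freq + STEP)

        def power_draw(freq, intensity):
            """Toy model: power grows with workload intensity and roughly f^2."""
            return 80.0 * intensity * (freq / F_MIN) ** 2

        freq = F_MIN
        for _ in range(1000):                  # the hardware samples every few microseconds
            freq = foxton_step(freq, power_draw(freq, intensity=0.9))
        print(f"heavy workload settles near {freq:.2f} GHz")   # below F_MAX

    A lighter workload (say intensity=0.6) would hit the 2.0GHz ceiling instead - the "maximum clock speed the design allows" case.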
  • JarredWalton - Monday, September 12, 2005 - link

    Good points, and obviously I wasn't trying to get into the deep details of Itanium. I have a question for you, though, as you seem to know plenty about Itanium: Intel currently has IA-32EL; is there an IA-EM64T-EL in the works? (It might be called something else, but basically EM64T emulation for Itanium?)

    Even though Foxton is hardware based, we still don't know how it actually performs in practice - at least, I don't. (I probably never will, as I haven't even used an Itanium system other than to poke around a bit at some tradeshows.) 955 can run as high as 2.0 GHz under load - in practice, can you actually reach that speed most of the time, or is it more like 1.80 GHz for a bit, then 2.0 GHz for a bit, and maybe 1.90 GHz in between?

    Also, are you sure about the "30 million transistors" part? That's larger than the entire Itanium Merced core (not counting the L3 cache). I suppose if you're talking about all the debugging and monitoring transistors, 30 million might be possible, but I didn't think all of that was lumped under "Foxton"?
  • IntelUser2000 - Monday, September 12, 2005 - link

    I think there is a plan for an EM64T extension to IA-32EL. I heard from the Inquirer that Montvale may have that, but I could have misunderstood it, or it's a rumor. It's just software support, so I guess Intel can add it whenever they want to.

    For Foxton speeds, it depends. From what I understand, there is a thing called a power virus (a malicious program that executes a specific instruction mix in order to establish the maximum power rating for a given CPU). If the power-virus number is 1.0 (meaning 100% of maximum power), then Linpack is 0.8, SPECfp2K is 0.7, SPECint2K is 0.65, and TpmC is 0.6. Since TpmC is furthest from the power-virus figure, it would reach maximum speed all the time - for the 9055, that is 2.0GHz. For SPECcpu2K it may be 1.9GHz, and for Linpack it may be 1.8GHz. So some programs may see no benefit AT ALL, while others get the maximum. (Rough numbers below.)

    Foxton can sample every 8µs to change voltage and frequency.
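    To put rough numbers on that - assuming (purely my guess) a power budget at 0.65 of the power-virus draw, and power scaling with roughly the square of frequency:

        import math

        F_MIN, F_MAX = 1.6, 2.0    # assumed frequency range for the 9055, GHz
        BUDGET = 0.65              # assumed budget as a fraction of power-virus draw
        RATIOS = {"power virus": 1.0, "Linpack": 0.8, "SPECfp2K": 0.7,
                  "SPECint2K": 0.65, "TpmC": 0.6}

        for name, ratio in RATIOS.items():
            # if power ~ f^2, the sustainable clock is f_max * sqrt(budget / ratio)
            f = max(F_MIN, min(F_MAX, F_MAX * math.sqrt(BUDGET / ratio)))
            print(f"{name:12s} -> ~{f:.2f} GHz")

    That toy model lands near the guesses above - Linpack around 1.8GHz, SPECfp2K around 1.9GHz, TpmC pegged at 2.0GHz - but the exact mapping is Intel's, not mine.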


    Yes, I am sure about the Foxton hardware transistor count. It uses a custom 32-bit DSP with its own RAM to process the data necessary for Foxton. I was sort of surprised, but yeah, around 30 million. Sorry I couldn't give the link - tell me how and I'll send it to you - but I do remember clearly. Merced has 25 million transistors including the 96KB L2; without it, that's around 20 million, I guess. But Mckinley is actually simpler and has fewer logic transistors than Merced - according to some, around 15-17 million transistors.

    Montecito has 64 million transistors NOT including L2. 64 - 30 = 34 million, divided between 2 cores = 17 million transistors per core, which is right on the mark for a Mckinley-class core.
  • IntelUser2000 - Wednesday, September 14, 2005 - link

    http://66.102.7.104/search?q=cache:fZ7OTmmmXrgJ:ww...

    Well, I was KINDA right.

    quote:

    Hewlett-Packard declared. 30 million transistors, as many as are in a Pentium II, are responsible solely for power management

    Though, yes, that doesn't mean they are all for Foxton. Maybe; I don't know.

    Itanium Merced has 25.4 million transistors. ~6 million of those are dedicated to the x86 hardware emulator, which leaves 19.4 million transistors. Without the 96KB L2, that's around 14-15 million transistors of Merced core logic.
  • IntelUser2000 - Wednesday, September 14, 2005 - link

    OTOH, I think the site could be wrong. It doesn't square with other Montecito papers saying Foxton consumes less than 0.5W and takes less than 0.5% of the die area. I give up, haha.
  • Jimw18600 - Monday, September 12, 2005 - link

    Your definition of HTT is a little skewed. It doesn't enable processing multiple threads; that capability was always there, whether the threads were earmarked or not. What it does do is, instead of flushing the instruction buffer back to the missed branch, restart the broken thread and continue the rest forward. Broken threads are simply tossed out and their resources reclaimed in the last stage of the pipeline; completed threads are retired. And by the way, the reason Intel was forced to go to HTT was that they were heading for 31-stage pipelines. If you were still back at 12-15 stages, HTT wouldn't have had that much to offer.
  • JarredWalton - Monday, September 12, 2005 - link

    My definition of HTT was actually taken directly from the roadmap. That's how Intel describes it, and obviously a one-sentence summary leaves out a lot of details. HTT does allow the concurrent execution of more than one thread, but resource contention makes it difficult to say exactly how HTT will affect performance.

    One interesting point about SMT in general is that POWER5 doesn't have 20 to 31 pipeline stages, and yet it still benefits from IBM's SMT design. This is purely a hunch on my part, but I wouldn't be at all surprised to see some form of HT come out for Conroe/Woodcrest in the future. Trouble filling all four issue slots from one thread? SMT could help out. We'll see whether Intel does that.

    Note: HTT has actually been present (but disabled) since Northwood for sure. Some people suspect it was present in an early form in Willamette as well. Just because Conroe doesn't currently show any HT support doesn't mean there aren't some deactivated features awaiting further testing. :)
  • IntelUser2000 - Monday, September 12, 2005 - link

    From what I understand, modern single-thread processors like the early Northwood P4s can execute multiple threads, just not ALL simultaneously. Since today's processors are fast enough anyway, it SEEMS like multitasking. The OS decides how to divide the time between threads, I guess.

    HT makes use of the otherwise idle units, since it basically presents double the demand to the CPU. Neither thread can take full advantage of the CPU on its own (say one reaches 15% utilization), but a second thread makes things more efficient by pushing utilization to 20%, which is 33% better throughput. It is more complex than that, but I think that explanation is enough. (Quick check below.)
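    A two-line sanity check on that arithmetic (the utilization figures are the illustrative ones above, not measurements):

        single_thread = 0.15   # one thread keeps ~15% of the execution resources busy
        with_smt = 0.20        # two threads together reach ~20%
        print(f"throughput gain: {with_smt / single_thread - 1:.0%}")   # -> 33%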

    POWER4/5's issue rate is 5-wide, which is quite a lot. It also has a 17-stage pipeline, which is close to the Willamette/Northwood Pentium 4. Wide and deep, with lots of bandwidth and enough execution units - it's perfect for SMT.
  • coomar - Monday, September 12, 2005 - link

    Kind of difficult to read through the "Confidential" stamped across the slides.

    Virtualization sounds interesting.
  • yorthen - Monday, September 12, 2005 - link

    Yes, I've heard a lot about VT and AMT but never found any good explanation of how they actually work. I understand that using VT will enable Xen to run unmodified OSes, but what is it that VT does that a normal processor cannot, and how does it compare to AMD's virtualization technology?

    And what about AMT, which is supposed to provide OS-independent management capabilities - what kinds of operations does it allow?
