Not really. If the architecture of the chips is the same then you can compare clock speeds, but if not, all comparisons of that nature go out the window, and that's what AMD are trying to make people realise after Intel's marketing campaign to get people thinking that x MHz = x speed.
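To put the idea roughly in code (the numbers here are made up purely for illustration, not real figures for any chip): what you actually get is clock speed times how much work the chip does per clock, so a lower-MHz chip can still finish more instructions per second.

    /* Illustrative only: effective throughput depends on work done per
     * clock (IPC) as well as raw clock speed. Both chips and all numbers
     * below are hypothetical. */
    #include <stdio.h>

    int main(void) {
        double clock_mhz_a = 1400.0, ipc_a = 1.5;  /* hypothetical "chip A" */
        double clock_mhz_b = 1800.0, ipc_b = 1.0;  /* hypothetical "chip B" */

        /* millions of instructions completed per second */
        printf("chip A: %.0f MIPS\n", clock_mhz_a * ipc_a);  /* 2100 */
        printf("chip B: %.0f MIPS\n", clock_mhz_b * ipc_b);  /* 1800 */
        return 0;
    }

So the chip with the lower MHz number comes out ahead, which is exactly why the MHz figure on its own tells you very little once the architectures differ.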
Take RISC chips for instance: they run at remarkably low clock speeds for what they do compared with... any other chip! And those Motorola chips that Apple use are RISC-based, so they get a lot more done per clock cycle than an Intel does.
This all has to be taken into consideration. As for whether an Intel or an AMD performs better, it depends on whether you are processing integers or floating-point numbers. Most Windows programs are probably written with an Intel chip in mind, so they try to convert everything to integer maths to make it run faster, and if you are using an AMD chip (better at floating point) you will notice a slight performance drop-off. Run a program that is all floating-point calculations, though, and check the difference.
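If you want to try it yourself, something like this rough C sketch is the kind of test I mean: one integer-heavy loop and one floating-point-heavy loop, timed separately. The loop bodies and iteration count are just placeholders I picked, and compiler optimisations can easily skew the result, so treat it as a starting point rather than a proper benchmark.

    /* Very rough micro-benchmark sketch: time an integer-heavy loop
     * against a floating-point-heavy loop. The workloads are arbitrary
     * examples, not a rigorous test. */
    #include <stdio.h>
    #include <time.h>

    #define N 100000000L

    int main(void) {
        volatile long   iacc = 1;    /* volatile so the loops aren't optimised away */
        volatile double facc = 1.0;

        clock_t t0 = clock();
        for (long i = 1; i <= N; i++)
            iacc = (iacc * 3 + i) % 1000003;   /* integer work */
        clock_t t1 = clock();
        for (long i = 1; i <= N; i++)
            facc = facc * 1.0000001 + 0.5;     /* floating-point work */
        clock_t t2 = clock();

        printf("integer loop: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("float loop:   %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

Run the same thing on both chips and compare how the two loops scale, rather than just the totals.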
This is all academic though; personally I have never been too interested in comparing chip speeds...