How to make your Windows machine more stable and secure
muzzy:
Oh my, that's a very very long post indeed. Do I have to answer all of it? :o
muzzy:
I think I'll start by addressing some of the more relevant things that are quick to answer. A lot of your response just seems to be a series of quick stabs, and I'm not sure if there's much to be gained by returning a similar series of quick stabs.
Regarding the W9x series, I don't want to think of it as a Windows because it differs so greatly from the NT-series. The NT series has a serious and stable native kernel running it, with Win32 Executive Subsystem on top of it implementing the win32 user environment. On the other hand, w9x is just a pile of black magic and hacks. Although it does work to an extent, and is a usable system for some purposes, the system design is a joke. Everything in it has been designed for backwards compatibility, and compromises have been made in the very core design. I don't like it.
What I said about all operating systems being equally vulnerable by design referred to the minimum privilege principle, and how badly it applies in every system in practice. In an ideal system, applications would manifest what they need to see from the filesystem, what libraries they need to access, and what system apis they need to call. Then, anything not requested would be plain and simple blocked, out of sight completely. Modern filesystems support ACLs to enforce privileges at user-based granularity, but I'd be more interested in process-based granularity. To thwart arbitrary code execution issues, perhaps even memory-map-based granularity. Also, I'd want the privilege minimization to happen before execution, not during runtime. This would mean that every process would have its own virtual filesystem, and a virtual api to use, based on what was requested in the manifest. With such a design, it would be much easier to determine which applications are safe and which are not, since it's computationally impossible to predetermine for sure whether an application will perform some action without actually running it (this is essentially the halting problem). There are no systems that do this. Also, in all modern systems, kernel-space security is an arms race.
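To make the "minimize before execution" part concrete, here's a minimal sketch using only standard POSIX calls; the jail path, uid/gid and binary name are made up, and a real implementation would take them from the manifest:

    /* restrict the child's view and privileges BEFORE it executes,
       instead of trusting it to drop them at runtime */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* filesystem scope from the (hypothetical) manifest */
        if (chroot("/jails/app") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* irrevocably drop root before any untrusted code runs */
        if (setgid(1000) != 0 || setuid(1000) != 0) {
            perror("drop privileges");
            return 1;
        }
        /* only now start the application: it can never see anything
           outside the view it was granted */
        execl("/bin/app", "app", (char *)0);
        perror("execl");
        return 1;
    }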
About worm propagation efficiencies: http://en.wikipedia.org/wiki/Metcalfe's_law
Regarding MS Paint, my mspaint gallery: http://muzzy.net/mspaint.html
Regarding root: Why is there a root user in the system at all if only badly designed applications would need it? Go and check how many suidroot apps you have: "find / -perm +4000". I'm sure you'll find plenty, and you probably won't even question why basic things like "su" and "passwd" are suidroot.
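If you want to see what suidroot actually means in practice, here's a toy program; compile it, chown it to root, set "chmod u+s" on it, and run it as a normal user:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* with the setuid bit on a root-owned binary, the effective
           uid becomes 0 while the real uid stays the invoking user's */
        printf("real uid: %d, effective uid: %d\n",
               (int)getuid(), (int)geteuid());
        /* passwd needs euid 0 at this point, because /etc/shadow
           is accessible to root only */
        return 0;
    }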
About source availability: having sources around doesn't make an application better quality. It's an additional freedom for you, and independent of the right to modify applications you run. You don't need sources to do that. They are two independent things, although sources usually come with licenses to modify the application. The license doesn't make the application itself better, it just grants you freedom to modify it, which I'd prefer to be everyone's right without explicit permission.
About centralized databases (for suids, the windows registry): database corruption isn't an issue. A filesystem can die, too, in a similar fashion. What's the difference? Having a centralized database, however, allows you to strictly regulate access to it. No more incidents where you mount an old fs with the suid option and then realize there's a backdoor on it, because a central suid database is easily reviewable. About the windows registry: registry hives can be mounted anywhere in the registry namespace, so you could have any number of hives you wanted. Each user has their own hive in their profile, too. The registry is just a standard namespace in windows, a concept which might seem strange from a *nix perspective, where there is only a single filesystem namespace into which everything gets mounted even if the contained data is semantically different. This is why windows devices are in a separate namespace and not in a filesystem in /dev.
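To make the hive mounting concrete, a minimal sketch against the documented registry api; the key name and hive path are invented for the example, and the caller needs SeBackupPrivilege/SeRestorePrivilege enabled:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* mount a hive file into the registry namespace under HKLM */
        LONG rc = RegLoadKeyA(HKEY_LOCAL_MACHINE, "MountedHive",
                              "C:\\hives\\extra.dat");
        if (rc != ERROR_SUCCESS) {
            printf("RegLoadKey failed: %ld\n", rc);
            return 1;
        }
        /* the hive's contents are now visible as HKLM\MountedHive */
        RegUnLoadKeyA(HKEY_LOCAL_MACHINE, "MountedHive");
        return 0;
    }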
About java and bytecode languages: It's not true that they're slow. Java apps are only slow because the UI code is braindead and sluggish. The VM itself performs pretty well, and since bytecode gets JITed at runtime, the VM can dynamically recompile slow parts based on how they're actually used, etc. Things like this are difficult with traditional compilers, which lack the advantages of a JITting VM unless you embed a bytecode engine into the compiled binaries (can you say bloat?). Theoretically, bytecode-compiled applications can perform faster than natively run binaries.
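As a toy illustration of the principle (not of how any real VM is implemented): start with a generic "interpreted" routine and swap in a specialized one once the call site turns out to be hot:

    #include <stdio.h>

    /* stands in for interpreting bytecode */
    static long interpreted(long n)
    {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += i;
        return s;
    }

    /* stands in for the code a JIT would emit for the hot path */
    static long compiled(long n)
    {
        return n * (n - 1) / 2;
    }

    static long (*impl)(long) = interpreted;
    static long calls = 0;

    static long run(long n)
    {
        if (++calls == 1000)   /* call site became hot: "recompile" */
            impl = compiled;
        return impl(n);
    }

    int main(void)
    {
        long total = 0;
        for (int i = 0; i < 100000; i++)
            total += run(100);
        printf("%ld\n", total);
        return 0;
    }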
About .NET and machine abstraction: I'm not oversimplifying things by saying that anything below it can be reimplemented without issues. Any issues would be performance differences in the new implementation, as applications expect some things to perform in a certain way. This is because all the applications are compiled into an intermediate language which shouldn't interface with the low-level system at all. The framework provides interoperability services for the transition phase of moving to .NET, but new applications can be written without depending on anything beneath the .NET layer.
Regarding the use of the word "user" and not "administrator": it's indeed a little confusing, but in the contexts where I've used it I've meant desktop systems where the user is the system administrator. The typical user plays the administrator role happily without the required competence for it, and with results we all know about.
Then, about my definition of when a system works and when it doesn't. I define it to depend on the intent for which the system is used. If it cannot fulfill those requirements, then it doesn't work. It's not enough if it boots and all the apps run fine and it doesn't crash. If network cards fail to work (at all) on my ancient compaq when I compile the kernel for a traffic shaping setup, then the kernel definitely doesn't work. For some seemingly innocent kernel configurations, the damn thing just died during boot. A lot of the advanced network functionality in the 2.6.x tree is known to have system-crashing bugs. That's enough for me to declare that the kernel tree doesn't work, as I couldn't get the damn thing to work even after several days of kernel hacking and debugging. The only thing I managed to figure out was that the NIC drivers themselves weren't likely at fault, but something strange in the iptables/packetscheduler implementation.
Regarding native applications and GNU, my point was that applications that are written against one api do not necessarily perform very well on a system where the api is provided in the form of a translation layer. Also, my wget on windows has some strange issues that it doesn't have on linux, and although the glitches only happen rarely they're still annoying. Another thing about native applications is applications that have a GUI. I recently installed bittorrent-4.0.0 with its crappy api. Now, I'm not that picky, but the damn thing breaks so many of windows' GUI principles it hurts. Not to mention that if I minimize it, the UI processing dies COMPLETELY and I won't be able to even close it. It does this every time. I'd rather have a native GUI.
Phew, which of the skipped issues you want me to respond to, or do you have any comments about what I just said?
Calum:
--- Quote from: muzzy ---I think I'll start by addressing some of the more relevant things that are quick to answer. A lot of your response just seems to be a series of quick stabs, and I'm not sure if there's much to be gained by returning a similar series of quick stabs.
--- End quote ---
i don't agree that this is the case, if you mean my comments were pointless and were personal, however if by "quick stabs" you mean i don't take many words to refute what you say, then fine, i can agree with that, but i can't agree that there's no value in that. here we go...
--- Quote ---Regarding the W9x series, I don't want to think of it as a Windows because it differs so greatly from the NT-series.
--- End quote ---
tough shit, smartarse. if you are going to keep moaning about linux 1.2 and complaining about one application or another and blame all your problems on "linux" as a result, then i think you should have few qualms about people who think of "microsoft windows 98" as a release of microsoft windows, whether you want to think of it as one or not.
--- Quote ---The NT series has a serious and stable native kernel running it, with Win32 Executive Subsystem on top of it implementing the win32 user environment.
--- End quote ---
so, the windows 9x stuff is implemented using a virtual machine? or is it an emulation layer? this is the sort of backwards compatibility i don't think will give the best or most reliable performance. i am not a fan of wine either.
--- Quote ---On the other hand, w9x is just a pile of black magic and hacks.
--- End quote ---
no arguments there, except perhaps for the "magic" part.
--- Quote ---Although it does work to an extent, and is a usable system for some purposes, the system design is a joke. Everything in it has been designed for backwards compatibility, and compromises have been made in the very core design. I don't like it.
--- End quote ---
me neither, but you will find that most "windows haters" whom you write off as imbecilic slack jawed yokels (i am paraphrasing) will justifiably and reasonably base a lot of their experiences on this windows release. not surprising since 17 years of microsoft windows has been various versions of this crap. you can't just write off millions of people's experiences of microsoft's useless software by saying you don't like to think of it as windows.
--- Quote ---What I said about all operating systems being equally vulnerable by design referred to the minimum privilege principle, and how badly it applies in every system in practice. In an ideal system, applications would manifest what they need to see from the filesystem, what libraries they need to access, and what system apis they need to call. Then, anything not requested would be plain and simple blocked, out of sight completely. Modern filesystems support ACLs to enforce privileges at user-based granularity, but I'd be more interested in process-based granularity.
--- End quote ---
sounds fair, it's similar to how zonealarm blocks applications from accessing the internet (for example) while iptables blocks ports instead.
--- Quote ---To thwart arbitrary code execution issues, perhaps even memory-map-based granularity. Also, I'd want the privilege minimization to happen before execution, not during runtime. This would mean that every process would have its own virtual filesystem, and a virtual api to use, based on what was requested in the manifest.
--- End quote ---
would this cost a lot of RAM? because on low RAM systems (if this were the case) i'd like to think i could have the choice of not doing something that might run slow or not at all as a result of this model.
--- Quote ---With such a design, it would be much easier to determine which applications are safe and which are not, since it's computationally impossible to predetermine for sure whether an application will perform some action without actually running it (this is essentially the halting problem). There are no systems that do this. Also, in all modern systems, kernel-space security is an arms race.
--- End quote ---
what do you think of the HURD concept incidentally? where everything is separated from the kernel if it possibly can be? seems like they are having a hard time implementing it after all these years, i don't know much about it, but just wondered what you think of their general principle behind kernel design.
--- Quote ---About worm propagation efficiencies: http://en.wikipedia.org/wiki/Metcalfe's_law
--- End quote ---
i honestly do not see how this relates to vulnerabilities of one system versus another relating to worms. i understand metcalfe's law but not your habit of invoking it regularly.
--- Quote ---Regarding MS Paint, my mspaint gallery: http://muzzy.net/mspaint.html
--- End quote ---
this is not an answer. if i email you my CV, how does that prove that abiword is superior to wordperfect? files produced using ms paint, no matter how impressive, reflect the artist's creation rather than the technical capabilities of the program. i often think people's expectations have a lot to do with it too. i find ms paint easy to use, but highly simplistic and therefore unsuitable for a lot of things. i never learned how to use photoshop much, and so gimp is not a step down for me, like a lot of photoshop users seem to complain. surely photoshop is for photos and gimp is a more general app anyway. what i am saying is, this isn't the point.
--- Quote ---Regarding root: Why is there a root user in the system at all if only badly designed applications would need it?
--- End quote ---
the administrator uses it. i log in as root to create users, change passwords, edit config files (that are read-only from the applications' point of view) and so on. surely you don't think a system can administrate itself? the assumption that it can has led to microsoft windows' appalling approach to security and i do not believe it is a sensible approach.
--- Quote ---Go and check how many suidroot apps you have: "find / -perm +4000". I'm sure you'll find plenty, and you probably won't even question why basic things like "su" and "passwd" are suidroot.
--- End quote ---
here are the results:
/usr/bin/chage
/usr/bin/gpasswd
/usr/bin/at
/usr/bin/sudo
/usr/bin/passwd
/usr/bin/crontab
/usr/bin/gpg
/usr/bin/gpg-agent
/usr/bin/gpg2
/usr/bin/lppasswd
/usr/bin/chfn
/usr/bin/chsh
/usr/bin/newgrp
/usr/bin/desktop-create-kmenu
/usr/libexec/openssh/ssh-keysign
/usr/sbin/ping6
/usr/sbin/traceroute6
/usr/sbin/traceroute
/usr/sbin/usernetctl
/usr/sbin/userisdnctl
/usr/sbin/userhelper
/usr/X11R6/bin/XFree86
/sbin/pam_timestamp_check
/sbin/pwdb_chkpwd
i wonder if this counts as plenty? i have no idea what this is all about actually, so i should probably read up on it. i should say though that this reflects red hat's defaults, and is still not blameable on "linux". blame red hat if you must, but unless you can explain to me why things cannot be configured securely under a linux system, your attempts to snipe at the defaults of a specific linux based system are unlikely to move me in any way.
--- Quote ---About source availability: having sources around doesn't make an application better quality. It's an additional freedom for you, and independent of the right to modify applications you run. You don't need sources to do that. They are two independent things, although sources usually come with licenses to modify the application. The license doesn't make the application itself better, it just grants you freedom to modify it, which I'd prefer to be everyone's right without explicit permission.
--- End quote ---
me too, and i agree with you here, as far as it goes, but you fail to even admit that such a thing as people checking each other's work for errors is beneficial. people checking each other's work cannot create more errors, it can only eliminate existing ones, and with thousands upon thousands of people doing this, it stands to reason that this is more effective than dozens and dozens (in the case of a company, such as microsoft). if you choose to discuss a subject, why not address the issue actually in question, instead of just repeating yourself?
--- Quote ---About centralized databases (for suids, the windows registry): database corruption isn't an issue. A filesystem can die, too, in a similar fashion. What's the difference? Having a centralized database, however, allows you to strictly regulate access to it. No more incidents where you mount an old fs with the suid option and then realize there's a backdoor on it, because a central suid database is easily reviewable. About the windows registry: registry hives can be mounted anywhere in the registry namespace, so you could have any number of hives you wanted. Each user has their own hive in their profile, too. The registry is just a standard namespace in windows, a concept which might seem strange from a *nix perspective, where there is only a single filesystem namespace into which everything gets mounted even if the contained data is semantically different. This is why windows devices are in a separate namespace and not in a filesystem in /dev.
--- End quote ---
up until windows 98 they were also accessible using filenames, has this never been the case under NT, out of interest? personally i think there are a lot of benefits to the "everything is a file" idea. database design is not my forte, so if there is something i should be criticising in your reply, i will have to leave it to somebody else to do so.
--- Quote ---About java and bytecode languages: It's not true that they're slow. Java apps are only slow because the UI code is braindead and sluggish. The VM itself performs pretty well, and since bytecode gets JITed at runtime, the VM can dynamically recompile slow parts based on how they're actually used, etc. Things like this are difficult with traditional compilers, which lack the advantages of a JITting VM unless you embed a bytecode engine into the compiled binaries (can you say bloat?). Theoretically, bytecode-compiled applications can perform faster than natively run binaries.
--- End quote ---
i have heard this, but this is kind of off topic, since we were discussing operating systems, and how one is allegedly better than another. whether compiling is better than interpreting is another discussion entirely, and of course is dependent on circumstances.
--- Quote ---About .NET and machine abstraction: I'm not oversimplifying things by saying that anything below it can be reimplemented without issues. Any issues would be performance differences in the new implementation, as applications expect some things to perform in a certain way. This is because all the applications are compiled into an intermediate language which shouldn't interface with the low-level system at all. The framework provides interoperability services for the transition phase of moving to .NET, but new applications can be written without depending on anything beneath the .NET layer.
--- End quote ---
this is kind of like an interpreted language, or virtual machine, emulation layer, whatever. again, i question whether this sort of thing is always appropriate, and suspect that there are performance issues related to it. I think that you should always try and make stuff work on the most minimal hardware possible. i don't go for the idea of testing stuff on the latest machinery and then just saying those are the minimum requirements. not everybody can buy new kit all the time. I think i am digressing though.
--- Quote ---Regarding the use of the word "user" and not "administrator": it's indeed a little confusing, but in the contexts where I've used it I've meant desktop systems where the user is the system administrator. The typical user plays the administrator role happily without the required competence for it, and with results we all know about.
--- End quote ---
yes, and that's my point. this is one reason why having a "root" user separate from user accounts is a good idea, because the person behind the keyboard knows which hat s/he is wearing at any one time, and if they don't they can just do a quick whoami. i was horrified when i heard that some linuces were trying to be like windows by having users log on as root - this is an appalling model, but sadly one that microsoft is happy to encourage amongst their "users".
--- Quote ---Then, about my definition of when a system works and when it doesn't. I define it to depend on the intent for which the system is used. If it cannot fulfill those requirements, then it doesn't work. It's not enough if it boots and all the apps run fine and it doesn't crash. If network cards fail to work (at all) on my ancient compaq when I compile the kernel for a traffic shaping setup, then the kernel definitely doesn't work.
--- End quote ---
i think i have to take a leaf out of your book and blame that on you. if you fail to compile the kernel in a way that is capable of supporting your hardware, then whose fault is it? if you were not able to recompile your own kernel, then you could blame the kernel coordinators (like with windows, in fact: you can't recompile their kernel, so microsoft are to blame for unsupported hardware if the problem is at kernel level, yes?), but if you do it yourself, then you know where the buck stops. This is not the same as the winmodem problem, incidentally, where actual hardware gets artificially emulated in software, but the software is only available for mswindows. In past years a lot of people blamed linux for not being able to support their modems, when the hardware vendors were responsible for the problem just mentioned, by churning out kit with bits missing and software to emulate them (of course, this has an associated performance cost, so is not as good as the real thing, even under mswindows). A similar problem still happens with some hardware, but fewer people are using dialup modems now i suppose.
--- Quote ---For some seemingly innocent kernel configurations, the damn thing just died during boot. A lot of the advanced network functionality in the 2.6.x tree is known to have system-crashing bugs. That's enough for me to declare that the kernel tree doesn't work, as I couldn't get the damn thing to work even after several days of kernel hacking and debugging.
--- End quote ---
do microsoft release their testing versions of software? you can consider that since the whole world is the development team for the linux kernel, that you are dealing with a "testing" version. how is it appropriate to compare testing versions with finished releases?
--- Quote ---The only thing I managed to figure out was that the NIC drivers themselves weren't likely at fault, but something strange in the iptables/packetscheduler implementation.
--- End quote ---
ok, i haven't used the 2.6 kernels yet, many 2.4 based systems work fine on my compaq m300.
--- Quote ---Regarding native applications and GNU, my point was that applications that are written against one api do not necessarily perform very well on a system where the api is provided in the form of a translation layer.
--- End quote ---
true, and it's what i was saying above in my replies to you here.
--- Quote ---Also, my wget on windows has some strange issues that it doesn't have on linux, and although the glitches only happen rarely they're still annoying.
--- End quote ---
ok, my solution to this is don't use it in windows, use it in a real GNU system like linux, your mileage may vary though, since you seem to consider that things should run fine in windows. i am sure you are right in your criticisms about crossover office incidentally, and i am not keen on this sort of thing either, this is essentially the same thing you're complaining about here, i suppose.
--- Quote ---Another thing about native applications is applications that have a GUI. I recently installed bittorrent-4.0.0 with its crappy api. Now, I'm not that picky, but the damn thing breaks so many of windows' GUI principles it hurts. Not to mention that if I minimize it, the UI processing dies COMPLETELY and I won't be able to even close it. It does this every time. I'd rather have a native GUI.
--- End quote ---
that's not much use. still, it has nothing to do with linux does it? is it from the GNU software people? i suspect that's a third party app, just like any crappy third party app (there are thousands) with its own bugs. the fact of it being open source, or whatever your main point is, doesn't really come into it. one of my favourite applications in windows is CDex - it looks consistent with the windows UI, is fast, efficient, easily configurable and completely stable. It is also totally open source, and incidentally, it's written for mswindows. what i am saying here is that criticising the open source model based on some crappy software is ludicrous, since there's no connection between the two just because some crappy software happens to be open source (tons more shit software is shareware or postcardware, for example).
--- Quote ---Phew, which of the skipped issues you want me to respond to, or do you have any comments about what I just said?
--- End quote ---
there are my comments, if you skipped them, then you probably had your reasons, what we have said is still there for other contributors to read and comment on, so maybe somebody else will ask a question.
muzzy:
By "Quick Stabs" I meant your way of answering my points by merely addressing a way I express it. I.e. tangling to words, twisting them, and so on. I have a view here that I'm trying to express, and I'd rather like to discuss about it itself than the exact words I use to express it.
Regarding my view of not considering the w9x series a Windows operating system: it's because the two series are completely different operating systems, with completely different designs and approaches to doing things. NT is what Windows should've been from the very beginning.
--- Quote from: Calum ---so, the windows 9x stuff is implemented using a virtual machine? or is it an emulation layer?
--- End quote ---
Neither, actually. The win32 executive subsystem is practically just a process. The applications you run communicate with it through a client/server type of relationship. The win32 api is implemented as a bunch of libraries that applications link against, and these libraries implement the message passing between the application and the win32 executive. I think you've seen CSRSS.EXE in your process manager and wondered what it is; it's the win32 executive subsystem server process. The graphics and gui stuff, however, are implemented as a separate kernel-mode subsystem for higher performance, so that no context switching is needed for message passing. Nothing is "emulated".
The Win16 Executive Subsystem server is more of a virtual machine, even though it runs the binaries natively.
--- Quote from: Calum ---not surprising since 17 years of microsoft windows has been various versions of this crap. you can't just write off millions of people's experiences of microsoft's useless software by saying you don't like to think of it as windows.
--- End quote ---
17 years? Has it really been that long? All of the win3.x, win9x, and NT have been quite radically different systems. I think you're right about my use of the word, I should just call my OS of preference "Windows NT", except that people would think I mean some ancient version. I've preferred to use "Windows" to only mean the current design, which btw has been a separate branch of an OS since pre-3.x times. If only microsoft didn't call them all just "Windows", this naming practice makes me think they're referring to the user environment and not the OS...
--- Quote from: Calum ---what do you think of the HURD concept incidentally? where everything is separated from the kernel if it possibly can be? seems like they are having a hard time implementing it after all these years, i don't know much about it, but just wondered what you think of their general principle behind kernel design.
--- End quote ---
I haven't really looked into HURD, but since it's a pure microkernel design, I'm expecting they won't get a high performance desktop running anytime soon. The message passing overhead of a pure microkernel design is just too heavy IMO. Windows NT bypasses these issues by having a slightly altered microkernel design. If HURD can design around the context switching and scheduling overheads which come with a microkernel design, it could turn out to be a really good OS. It's a bit early to say, and I haven't really had an in-depth look into it.
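A crude way to feel the overhead in question, assuming nothing about HURD or NT internals: turn a direct call into a round trip through a second process, and time it against a plain function call:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int req[2], rep[2];
        char b = 0;

        if (pipe(req) != 0 || pipe(rep) != 0) {
            perror("pipe");
            return 1;
        }
        if (fork() == 0) {                  /* the "server" process */
            while (read(req[0], &b, 1) == 1)
                write(rep[1], &b, 1);       /* handle request, reply */
            _exit(0);
        }
        for (int i = 0; i < 100000; i++) {  /* the "client" */
            write(req[1], &b, 1);           /* send a request ... */
            read(rep[0], &b, 1);            /* ... block for the reply */
        }
        /* each iteration paid at least two context switches that a
           direct call would not */
        return 0;
    }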
--- Quote from: Calum ---i honestly do not see how this relates to vulnerabilities of one system versus another relating to worms. i understand metcalfe's law but not your habit of invoking it regularly.
--- End quote ---
There have been countless holes in linux which have been as severe as the windows holes. There has been enough time for people to write worms, too. Typically, they haven't had as big an impact as the windows worms do. This is simply a matter of numbers.
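A back-of-the-envelope illustration, with made-up numbers where only the ratio matters: if the susceptible windows population is 30 times the susceptible linux population, and a worm's impact scales with the number of possible infection pairs rather than with single hosts, then

    impact ∝ N²  ⇒  impact ratio ≈ 30² = 900, not 30.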
About MS Paint: yeah, it isn't very feature-filled, but my point was that it's perfectly suitable for drawing and should not be considered a joke. It's a serious application that can do a lot of things, just like gimp can do a lot of things. However, mspaint isn't a gimp replacement and gimp isn't a photoshop replacement.
And regarding suidroots, there just isn't a way around all of it. Applications are set suidroot because they need to do something that the user cannot do. Typically applications drop their root privileges after they're done using them, but there have been countless vulnerabilities that have occurred before this happens. One way to solve the problem in a *nix environment is to create a separate user for the process. This works fine with services, so they can be chrooted for filesystem scoping and so on. However, it doesn't work at all for the applications mentioned above, because users cannot be given fine-grained privileges without really funky patches. Pretty much all of the current linux distros depend on the root user existing, and on suidroot applications running with its privileges. There are some interesting process-based security patches which take root privileges away from the user and give them to specific binaries, but such systems aren't used by any common distros.
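The drop-after-use pattern I mean looks roughly like this (the privileged operation is just an example); the vulnerability window I'm talking about is everything that runs before the drop:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* running suidroot: euid is 0, real uid is the invoking user */
        FILE *f = fopen("/etc/shadow", "r");  /* the one thing needing root */

        /* permanently drop root as early as possible */
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("drop privileges");
            return 1;                         /* never carry on as root */
        }
        /* from here on, a compromise yields only the user's own rights */
        if (f)
            fclose(f);
        return 0;
    }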
--- Quote from: Calum ---but you fail to even admit that such a thing as people checking each other's work for errors is beneficial.
--- End quote ---
You are making the assumption that sources are necessary for this, yet quality assurance testing is regularly done without sources.
--- Quote from: Calum ---up until windows 98 they were also accessible using filenames, has this never been the case under NT, out of interest? personally i think there are a lot of benefits to the "everything is a file" idea.
--- End quote ---
The devices are still available under the unified namespace; the CreateFile() api supports syntax like \\.\FOO to access objects under the object namespace's \??\ directory. The command prompt still looks up object names from the same directory as well, and this is where things like C:, D:, E: and other symbolic links live, pointing to the real physical devices. The idea of the object namespace is to have systemwide (and per-session) named objects for things like events, processes, threads, desktops, etc. Named pipes are still implemented as a filesystem and are all files, even though they're not part of either the object namespace or the filesystem namespace. There are various similar unofficial namespaces, and they are accessible through device objects in the object namespace. In conclusion, I don't see any benefits of "everything is a file" over the NT design. Any issues I can think of can be blamed on the command prompt implementation, which doesn't even support the full NT filesystem namespace (alternate stream syntax isn't properly supported, for example).
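For example, a minimal sketch of the \\.\ syntax against the documented api (opening a raw volume like this needs administrator rights):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "\\.\C:" resolves through the \??\ object directory to the
           volume's device object; no filesystem path is involved */
        HANDLE h = CreateFileA("\\\\.\\C:", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }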
--- Quote from: Calum ---i think i have to take a leaf out of your book and blame that on you. if you fail to compile the kernel in a way that is capable of supporting your hardware, then whose fault is it?
--- End quote ---
The thing is, it supported my hardware. Trying to compile it with the packet scheduling (hardware independent) stuff made it crash. The kernel is supposed to be compiled with various different settings, and every configuration is supposed to work, or at least give sensible errors about what's going on. Some of the configurations I tried wouldn't even go as far as starting init; they'd either mysteriously reboot (bug), or kernel panic due to something unexpected (bug). I know perfectly well how to compile stuff, and to the best of my knowledge my configurations were totally OK. The kernel just didn't work. If you had the patience to go through some of the changelogs, you'd find that the 2.6.x series is totally fucked. In 2.6.9 you could crash the kernel by merely opening enough connections, a bug which took down my shellbox once. Even with "normal" configurations the damn thing is so bug-ridden it hurts, and I figured out that there were some things that almost always made the kernel die a horrible death when turned on (ingress filtering, for example).
2.4.x late kernels work fine, but lack stuff for which I would've wanted to use 2.6.x. So, I made the mistake of assuming a kernel tree with a "stable" version numbering scheme would've actually had stable kernels.
--- Quote from: Calum ---do microsoft release their testing versions of software? you can consider that since the whole world is the development team for the linux kernel, that you are dealing with a "testing" version. how is it appropriate to compare testing versions with finished releases?
--- End quote ---
This was a really REALLY low blow. Basically, you are saying that linux shouldn't ever be expected to work? Yeah, that's about right. Now, think again about what you said, think carefully. Do you really want to ask me this question?
Orethrius:
To misquote Albert Einstein:
NT ist VMS to ze SECOND POWER, you TWIT!
That silliness out of the way, I'd just like to make a short observation. You seem to have a bad habit of blaming bad applications in Windows on the program sources, then blaming the same under Linux (not even a specific distro, mind you, the kernel AS A WHOLE) on the kernel compilers. I'll consider having a debate with you over that particular fallacy once you return from Ganymede and have your spacesuit disinfected.
EDIT:
Wikipedia
--- Quote ---Metcalfe's law states that the value of a communication system grows as approximately the square of the number of users of the system (N²).
--- End quote ---