I think I'll start by addressing some of the more relevant things that are quick to answer. A lot of your response just seems to be a series of quick stabs, and I'm not sure if there's much to be gained by returning a similar series of quick stabs.
i don't agree that this is the case, if you mean my comments were pointless or personal. however, if by "quick stabs" you mean i don't take many words to refute what you say, then fine, i can agree with that, but i can't agree that there's no value in that. here we go...
Regarding W9x-series, I don't want to think of it as a Windows because it differs so greatly from the NT-series.
tough shit, smartarse. if you are going to keep moaning about linux 1.2, complaining about one application or another, and blaming all your problems on "linux" as a result, then i think you should have few qualms about people who think of "microsoft windows 98" as a release of microsoft windows, whether you want to think of it as one or not.
The NT series has a serious and stable native kernel running it, with Win32 Executive Subsystem on top of it implementing the win32 user environment.
so, the windows 9x stuff is implemented using a virtual machine? or is it an emulation layer? this is the sort of backwards compatibility i don't think will give the best or most reliable performance. i am not a fan of wine either.
On the other hand, w9x is just a pile of black magic and hacks.
no arguments there, except perhaps for the "magic" part.
Although it does work to an extent, and is a usable system for some purposes, the system design is a joke. Everything in it has been designed for backwards compatibility, and compromises have been made in the very core design. I don't like it.
me neither, but you will find that most "windows haters" whom you write off as imbecilic slack-jawed yokels (i am paraphrasing) will justifiably and reasonably base a lot of their experiences on this windows release. that's not surprising, since 17 years of microsoft windows has meant various versions of this crap. you can't just write off millions of people's experiences of microsoft's useless software by saying you don't like to think of it as windows.
When I said all operating systems are equally vulnerable by design, I was referring to the minimum privilege principle, and how badly it applies in every system in practice. In an ideal system, applications would declare in a manifest what they need to see from the filesystem, what libraries they need to access, and what system apis they need to call. Then anything not requested would be plain and simple blocked, out of sight completely. Modern filesystems support ACLs to enforce privileges at user-based granularity, but I'd be more interested in process-based granularity.
sounds fair, it's similar to how zonealarm blocks applications from accessing the internet (for example) while iptables blocks ports instead.
To thwart arbitrary code execution issues, perhaps even memory-map based. Also, I'd be interested in the privilege minimization happening before execution, not during runtime. This would mean that every process would have its own virtual filesystem, and a virtual api to use, based on what was requested in the manifest.
would this cost a lot of RAM? because on low RAM systems (if this were the case) i'd like to think i could have the choice of not doing something that might run slow or not at all as a result of this model.
With such a design, it would be much easier to determine which applications are safe and which are not, since it's computationally impossible to determine for certain in advance whether an application will perform some action, without actually running it. There are no systems that do this. Also, in all modern systems, security in kernel space is an arms race.
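As a rough illustration of the manifest idea above, here is a userspace sketch (the `MANIFEST` set and `guarded_open` are hypothetical names invented for this example; real enforcement would have to live in the kernel, roughly the way systems like AppArmor attach per-program profiles):

```python
import os

# hypothetical per-process manifest: the only filesystem paths this
# application declared it needs. anything else is simply not visible.
MANIFEST = {"/tmp/scratch.txt", "/etc/hosts"}

def guarded_open(path, mode="r"):
    """Open a path only if the manifest declared it; everything not
    requested is 'plain and simple blocked, out of sight completely'."""
    if os.path.abspath(path) not in MANIFEST:
        raise PermissionError(path + " not declared in manifest")
    return open(path, mode)
```

Of course, a userspace wrapper like this is trivially bypassed (the process can still call `open` directly), which is exactly why the minimization would have to happen before execution, at the system level, as argued above.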
what do you think of the HURD concept incidentally? where everything is separated from the kernel if it possibly can be? they seem to be having a hard time implementing it after all these years. i don't know much about it, but just wondered what you think of the general principle behind their kernel design.
About worm propagation efficiencies: http://en.wikipedia.org/wiki/Metcalfe's_law
i honestly do not see how this relates to the vulnerabilities of one system versus another where worms are concerned. i understand metcalfe's law, but not your habit of invoking it regularly.
Regarding MS Paint, my mspaint gallery: http://muzzy.net/mspaint.html
this is not an answer. if i email you my CV, how does that prove that abiword is superior to wordperfect? files produced using ms paint, no matter how impressive, reflect the artist's skill rather than the technical capabilities of the program. i often think people's expectations have a lot to do with it too. i find ms paint easy to use, but highly simplistic and therefore unsuitable for a lot of things. i never learned to use photoshop much, so gimp is not a step down for me, the way a lot of photoshop users seem to complain it is. surely photoshop is for photos and gimp is a more general app anyway. what i am saying is, this isn't the point.
Regarding root: Why is there a root user in the system at all if only badly designed applications would need it?
the administrator uses it. i log in as root to create users, change passwords, edit config files (which are read-only from the applications' point of view) and so on. surely you don't think a system can administrate itself? the assumption that it can has led to microsoft windows' appalling approach to security, and i do not believe it is a sensible approach.
Go and check how many suidroot apps you have: "find / -perm +4000", I'm sure you'll find plenty, and you probably won't even question why basic things like "su" and "passwd" are suidroot.
here are the results:
/usr/bin/chage
/usr/bin/gpasswd
/usr/bin/at
/usr/bin/sudo
/usr/bin/passwd
/usr/bin/crontab
/usr/bin/gpg
/usr/bin/gpg-agent
/usr/bin/gpg2
/usr/bin/lppasswd
/usr/bin/chfn
/usr/bin/chsh
/usr/bin/newgrp
/usr/bin/desktop-create-kmenu
/usr/libexec/openssh/ssh-keysign
/usr/sbin/ping6
/usr/sbin/traceroute6
/usr/sbin/traceroute
/usr/sbin/usernetctl
/usr/sbin/userisdnctl
/usr/sbin/userhelper
/usr/X11R6/bin/XFree86
/sbin/pam_timestamp_check
/sbin/pwdb_chkpwd
i wonder if this counts as plenty? i have no idea what this is all about, actually, so i should probably read up on it. i should say, though, that this reflects red hat's defaults, and is still not blameable on "linux". blame red hat if you must, but unless you can explain to me why things cannot be configured securely under a linux system, your attempts to snipe at the defaults of a specific linux-based system are unlikely to move me in any way.
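for anyone else reading along, what the list above is about: the setuid bit makes an executable run with its owner's privileges (root, for those files) instead of the invoker's, which is how passwd can update the password database for you. a self-contained way to see the bit, using a throwaway file rather than assuming anything about your system:

```python
import os
import stat
import tempfile

def is_setuid(path):
    """True if the setuid bit is set, i.e. the program would run with the
    privileges of the file's owner rather than of whoever invokes it."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# demonstrate on a throwaway file we own; the leading 4 in 4755 is the
# setuid bit, the same bit that "find / -perm +4000" matches on
with tempfile.NamedTemporaryFile(delete=False) as f:
    tmp = f.name
os.chmod(tmp, 0o4755)
print(is_setuid(tmp))   # True
os.remove(tmp)
```

setting the bit on a file you own needs no special privileges; it only has security implications on executables owned by root, like the ones in the list.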
About source availability, having sources around doesn't make application better quality. It's an additional freedom for you, and independent to the right to modify applications you run. You don't need sources to do that. They are two independent things, although sources usually come with licenses to modify the application. The license doesn't make the application itself better, it just grants you freedom to modify it, which I'd prefer to be everyone's right without explicit permission.
me too, and i agree with you here, as far as it goes, but you fail to even admit that people checking each other's work for errors is beneficial. such review cannot create more errors, it can only eliminate existing ones, and with thousands upon thousands of people doing it, it stands to reason that this is more effective than dozens and dozens (in the case of a company such as microsoft). if you choose to discuss a subject, why not address the issue actually in question, instead of just repeating yourself?
About centralized databases (for suids, the windows registry): database corruption isn't an issue. A filesystem can die, too, in a similar fashion, so what's the difference? Having a centralized database, however, allows you to strictly regulate access to it. No incidents of mounting an old fs with the suid option and then realizing there's a backdoor on it, when the suid database is central and easily reviewable. About the windows registry: registry hives can be mounted anywhere in the registry namespace, so you could have any number of hives you wanted. Each user has their own hive in their profile, too. The registry is just a standard namespace in windows, a concept which might seem strange from a *nix perspective, where there is only a single filesystem namespace into which everything gets mounted even if the contained data is semantically different. This is why windows devices are in a separate namespace and not in a filesystem under /dev.
up until windows 98 they were also accessible using filenames; has this never been the case under NT, out of interest? personally i think there are a lot of benefits to the "everything is a file" idea. database design is not my forte, so if there is something i should be criticising in your reply, i will have to leave it to somebody else to do so.
About java and bytecode languages: It's not true that they're slow. Java apps are only slow because the UI code is braindead and sluggish. The VM itself performs pretty well, and since bytecode gets JITed at runtime, the VM can dynamically recompile slow parts based on how they're used, etc. Things like this are difficult with traditional compilers, and you definitely lose the advantages of a JITting VM unless you implement a bytecode engine into compiled binaries (can you say bloat?). Theoretically, bytecode-compiled applications can perform faster than natively run binaries.
i have heard this, but this is kind of off topic, since we were discussing operating systems, and how one is allegedly better than another. whether compiling is better than interpreting is another discussion entirely, and of course it depends on circumstances.
About .NET and machine abstraction: I'm not oversimplifying things by saying that anything below it can be reimplemented without issues. Any issues would be performance differences in the new implementation, as applications expect some things to perform in a certain way. This is because all the applications are compiled into an intermediate language which shouldn't interface with the low-level system at all. The framework provides interoperability services for the transition phase of moving to .NET, but new applications can be written without depending on anything beneath the .NET layer.
this is kind of like an interpreted language, or a virtual machine, or an emulation layer, whatever. again, i question whether this sort of thing is always appropriate, and suspect that there are performance issues related to it. i think you should always try to make stuff work on the most minimal hardware possible. i don't go for the idea of testing stuff on the latest machinery and then just declaring that to be the minimum requirement. not everybody can buy new kit all the time. i think i am digressing though.
Regarding the use of the word "user" and not "administrator": it's indeed a little confusing, but in the contexts where I've used it, I've meant desktop systems where the user is the system administrator. The typical user happily plays the administrator role without the required competence for it, with results we all know about.
yes, and that's my point. this is one reason why having a "root" user separate from user accounts is a good idea: the person behind the keyboard knows which hat s/he is wearing at any one time, and if they don't, they can just do a quick whoami. i was horrified when i heard that some linuces were trying to be like windows by having users log on as root - this is an appalling model, but sadly one that microsoft is happy to encourage amongst their "users".
Then, about my definition of when a system works and when it doesn't. I define it to depend on the intent for which the system is used. If it cannot fulfill those requirements, then it doesn't work. It's not enough if it boots and all the apps run fine and it doesn't crash. If network cards fail to work (at all) on my ancient compaq when I compile the kernel for a traffic shaping setup, then the kernel definitely doesn't work.
i think i have to take a leaf out of your book and blame that on you. if you fail to compile the kernel in a way that supports your hardware, then whose fault is it? if you were not able to recompile your own kernel, then you could blame the kernel coordinators (as with windows, in fact: you can't recompile their kernel, so microsoft are to blame for unsupported hardware if the problem is at kernel level, yes?), but if you do it yourself, then you know where the buck stops. This is not the same as the winmodem problem, incidentally, where actual hardware gets artificially emulated in software, but the software is only available for mswindows. In past years a lot of people blamed linux for not being able to support their modems, when the hardware vendors were responsible for the problem just mentioned, by churning out kit with bits missing and software to emulate them (of course, this has an associated performance cost, so it is not as good as the real thing, even under mswindows). A similar problem still happens with some hardware, but fewer people are using dialup modems now i suppose.
For some seemingly innocent kernel configurations, the damn thing just died during boot. A lot of the advanced network functionality in 2.6.x tree is known to have system crashing bugs. That's enough for me to declare that the kernel tree doesn't work, as I couldn't get the damn thing to work even after several days of kernel hacking and debugging.
do microsoft release their testing versions of software? considering that the whole world is the development team for the linux kernel, you could say you are dealing with a "testing" version. how is it appropriate to compare testing versions with finished releases?
The only thing I managed to figure out was that the NIC drivers themselves weren't likely at fault; it was something strange in the iptables/packet-scheduler implementation.
ok, i haven't used the 2.6 kernels yet, many 2.4 based systems work fine on my compaq m300.
Regarding native applications and GNU: my point was that applications written against one api do not necessarily perform very well on a system where that api is provided in the form of a translation layer.
true, and it's what i was saying above in my replies to you here.
Also, my wget on windows has some strange issues that it doesn't have on linux, and although the glitches only happen rarely they're still annoying.
ok, my solution to this is: don't use it in windows, use it in a real GNU system like linux. your mileage may vary though, since you seem to consider that things should run fine in windows. i am sure you are right in your criticisms of crossover office, incidentally, and i am not keen on this sort of thing either; it is essentially the same thing you're complaining about here, i suppose.
Another thing about native applications is applications that have GUI. I recently installed bittorrent-4.0.0 with its crappy api. Now, I'm not that picky, but the damn thing breaks so many of windows GUI principles it hurts. Not to mention that if I minimize it, the UI processing dies COMPLETELY and I won't be able to even close it. It does this every time. I'd rather have a native GUI.
that's not much use. still, it has nothing to do with linux, does it? is it from the GNU software people? i suspect it's a third party app, just like any crappy third party app (there are thousands) with its own bugs. the fact of it being open source, or whatever your main point is, doesn't really come into it. one of my favourite applications in windows is CDex - it looks consistent with the windows UI, is fast, efficient, easily configurable and completely stable. it is also totally open source, and, incidentally, it's written for mswindows. what i am saying here is that criticising the open source model based on some crappy software is ludicrous, since there's no connection between the two just because some crappy software happens to be open source (tons more shit software is shareware or postcardware, for example).
Phew, which of the skipped issues you want me to respond to, or do you have any comments about what I just said?
those are my comments; if you skipped any, then you probably had your reasons. what we have said is still there for other contributors to read and comment on, so maybe somebody else will ask a question.