I know WMD and others hashed out a lot of this already, but this guy has some facts wrong.
[QUOTE]
Sure one CAN create self-installing packages, but the creation of such packages is MUCH HARDER for GNU/Linux than for other systems. The developers need to constantly add new ABI tests to their installers, in order to make the software compatible with the distros they support.
[/QUOTE]
Actually, no. On Windows, one must download an installer maker (which usually costs money) and use it to create the installer. Sounds right? Yeah. However, on Linux it's the same thing; in fact, the standard installer maker for Linux (de facto, anyway) is Loki's setup tool, which I have seen more instances of in the last three days than ever before (downloading games and proprietary software, it abounds). The tool is here:
http://www.lokigames.com/development/setup.php3
[QUOTE]
Ever heard of glibc breaking backwards compatibility? Yeah, that happens quite often (about every 2 years or so), and this makes binary packaging really hard: the only way to be 100% sure the software can be run on the target distro is to compile it from source, but this won't work for proprietary software.
[/QUOTE]
Yeah, there were several glibc compatibility breaks in the past, and frankly I agree that glibc needs to take more initiative to keep backwards compatibility. And yes, this *can* make binary redistribution difficult. However, you must remember that the people behind glibc, the people behind Linux, and even the people behind KDE are all *different* *people*. Up until the last few years they weren't in the limelight at all, which gave them little incentive to make users' every dream and wish come true. In fact there are still a lot of developers who remember such a time and deliberately try not to give in to a user's every whim. Given time and new leadership, things begin to change: for instance, only introducing binary incompatibility with new major versions, so that old major versions can be kept around alongside them.
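For what it's worth, ISVs who can't ship source already have a workaround: build against the oldest glibc you intend to support, or pin individual symbols to an old version at link time. Here's a rough sketch of the second trick; the version tag is only an example (check what your oldest target actually exports with "objdump -T /lib/libc.so.6 | grep memcpy"), and you'd compile with -fno-builtin-memcpy so GCC doesn't inline the call away:

[code]
/* Sketch only: bind our memcpy references to an older versioned glibc
   symbol so the resulting binary also runs on distros that ship only the
   old version. GLIBC_2.2.5 is illustrative; pick whatever your oldest
   target distro actually provides. */
#include <stdio.h>
#include <string.h>

__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void)
{
    char dst[16];
    memcpy(dst, "hello", 6);   /* resolves against the pinned old symbol */
    printf("%s\n", dst);
    return 0;
}
[/code]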
Then there is the second point: Mono, and what it will mean with further adoption. Binaries produced by Mono (like those from any other CIL-targeting compiler) will work on Windows, on Linux, on Mac OS X; hell, if you can get a runtime working on BeOS you could run them there too. Of course this cross-platform-ness covers all the different flavors of Linux as well. I do believe it will be the saving grace for this problem if glibc and other projects continue introducing breakage.
By the way, you forgot Linux with its refusal to introduce a stable ABI for drivers. Linus claims he will never change his mind, but frankly, who cares if he never changes his mind? The opportunity is open for someone else to fix it with some magical solution. Stranger things have happened in the Linux world, like the introduction of the Composite and Damage extensions. Everyone involved in X11 development (including me, slightly) expected alpha channels on Linux to appear as a single extension, with the semantics of blending windows together built into the server; but because of the vast difficulty of doing that, an even better solution was devised, one that makes endless effects possible and implementable by any capable dev who tries.
[QUOTE]
Linux has enough users to attract developers to make software for Linux ... BUT, most developers become frustrated with the inconsistent ABI and lack of standards. Without a stable and backwards-compatible runtime environment, ISVs are forced to suck the cock of the people who happen to design the distros ...
[/QUOTE]
Given your personal frustrations with inconsistent APIs and ABIs, I can tell you that people are working on these matters; in fact, I am one of them (komodoware.com).
[QUOTE]
If a particular device won't work the way the user would want it to, the fault is not only the HW manufacturer's: the target distro might have fucked up some plug&pray magic at the userland level, and nothing happens when the user plugs in a device. Simple as that.
[/QUOTE]
Just as Windows may have fucked something up. Just the other day I tried to get my USB CD burner working (it has worked fine in Windows forever), but Windows failed to notify me when I plugged it in, and in fact I had to go to the Device Manager (something few home users know about) and try to figure out why Windows wasn't using the correct driver for it. I went to the manufacturer's site and got their drivers, but still to no avail! Windows has just the same problems, if not more. Just because you have had success on Windows and failure on Linux doesn't mean that's everybody's experience. *Not* simple as that.
[QUOTE]
And because GNU/Linuxes have NO standards for plug&pray, then who can be blamed? The HW manufacturer, even if the driver would work? The distro developers, who make their own quick-n-dirty scripts to invoke some retard-proof plug&play magic?
[/QUOTE]
Linux doesn't have standards for plug and play because neither does Windows. All "plug and play" is is a marketing term for software that detects the hardware and loads the proper driver. So who can be blamed when a device doesn't work? Blame your distribution! Duh! The people who put the combination of individual tools together into an OS are the people who have failed to get your USB mouse (or whatever) working.
[QUOTE]
In GNU/Linux no one can be blamed for the things that are not standardized. And since nothing is standardized, the only thing we can bitch about is the retarded system design most distros have these days.
And that ain't really no-one's fault, it's just the way things work with OpenSource /,,/
[/QUOTE]
Again, you are working from false facts here. You should hold your distro responsible for any problems using it. This is why distros so commonly have someone working upstream with OSS projects, so that they can fix the problems users complain about. And please do not generalize "Open Source" to what you see on your Linux distribution. "Open Source" is 100% separate from Linux. That's like jumping up and yelling at IBM (makers of OS/2 at one point, remember) because Windows doesn't work!! Remember, Mac OS X is based on Darwin, which is open source.
[QUOTE]
It's better to do it all yourself than let some other guy decide how things should work. Since money can't be used to enforce standards, we are the kings of nothing.
[/QUOTE]
You're right, but you just made all your previous points moot. If you want to do it yourself, it won't be easy. And I don't get the "kings of nothing" thing.
[QUOTE]
In Linux there are NO standard installer ABIs. That is why I am trying to make this sandboxed binary runtime platform ... but nobody seems to care, since they are occupied with wanking over their favorite distros and their Godly package managers (AAAAAH APT IS SO SEXY IM GONNA CUM LIKE A HORSE!)
[/QUOTE]
Since when does Windows have a "standard installer ABI", unless you mean the registry keys that map installed applications to their uninstallers? That in itself is merely in its beginning stages on Linux. All it takes is the right toolkit for developers: one that talks to whichever package manager is available on the machine and does the right thing. There are several solutions, in library and program form, which attempt this. By moving the job of dealing with multiple package managers into the toolkit, the application is free to do what it was designed for. And if Mono does in fact keep growing in usage, a copy of the toolkit can be shipped with the binary distribution of your software so that it really will work everywhere. Such libs are small, of course, since they don't include the other stuff that comes with packages, such as documentation, source code, and the overhead of whatever package manager is being used. Once Linux distros get used to this happening with proprietary software, they'll create tools which remove unneeded libraries and files from the individual install directories (remember that toolkit?).
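To make that less hand-wavy, here is a rough, purely hypothetical sketch of the core idea: probe for whichever package manager the host distro ships and hand the package over. The paths and flags are illustrative only; a real toolkit would link against the managers' own libraries and handle dependencies and errors properly instead of shelling out.

[code]
/* Hypothetical sketch: detect the host's package manager and pass the
   package file to it. Not a real toolkit, just the shape of the idea. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int have(const char *path)
{
    return access(path, X_OK) == 0;   /* is there an executable at this path? */
}

static int install_package(const char *file)
{
    char cmd[1024];

    if (have("/usr/bin/dpkg"))
        snprintf(cmd, sizeof cmd, "dpkg -i '%s'", file);
    else if (have("/bin/rpm") || have("/usr/bin/rpm"))
        snprintf(cmd, sizeof cmd, "rpm -Uvh '%s'", file);
    else {
        fprintf(stderr, "no known package manager found\n");
        return -1;
    }
    return system(cmd);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <package-file>\n", argv[0]);
        return 1;
    }
    return install_package(argv[1]) == 0 ? 0 : 1;
}
[/code]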
[QUOTE]
There are package managers in Linux, but they are always integrated tightly with the distribution-specific system layout (and with the runtime conventions of the system), and so they are not suitable for universally installable packages ... heck, there is NO commonly accepted universal runtime standard, and that is sad.
[/QUOTE]
The package manager and filesystem layout are perhaps the only large inhibitors of such packages: the ABI of most projects *does* stay pretty stable, and this just reiterates my point about Mono once again (in managed code there is effectively no ABI, only an API: you can add members anywhere and dependent code will still use the correct struct/class layout).
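To show why that matters, here is a tiny made-up C example of the problem Mono sidesteps: a caller compiled against the old header bakes the struct size and member offsets into its machine code, so "just adding a member" silently moves existing fields out from under it.

[code]
/* Made-up illustration of a C ABI break: code compiled against widget_v1
   reads 'height' at byte offset 4, but after the library "just adds a
   member" that offset now holds 'border'. CIL code looks members up by
   name when it is loaded, so it doesn't hit this. */
#include <stdio.h>
#include <stddef.h>

struct widget_v1 { int width; int height; };              /* what the ISV built against */
struct widget_v2 { int width; int border; int height; };  /* what the new distro ships   */

int main(void)
{
    printf("height lives at offset %zu in v1, but offset %zu in v2\n",
           offsetof(struct widget_v1, height),
           offsetof(struct widget_v2, height));
    return 0;
}
[/code]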
Funny you should mention filesystem layout, because the toolkit I've been so fondly speaking of has just this feature: the ability to define the layout of the filesystem in a system-global layout file, so that supporting a distro doesn't require you to bend over to one particular layout. Right now (it's still pre-release) it reads /.System/Layout.xml. On the system it was built for (my Komodo distribution) it maps stuff to /System, /System/Software, /Software, /System/Temp, etc., but on a traditional Linux system a Layout.xml could specify paths like /usr/bin and /etc. Again, the app doesn't really care.
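From the application's point of view it looks something like the sketch below: ask for a logical location instead of hardcoding /usr or /System. The names here are made up for the example, and the little table stands in for whatever /.System/Layout.xml actually says on the host.

[code]
/* Hypothetical sketch of layout indirection. In the real toolkit the
   mappings would come from /.System/Layout.xml, not a hardcoded table. */
#include <stdio.h>
#include <string.h>

struct mapping { const char *logical; const char *path; };

static const struct mapping layout[] = {
    { "Settings", "/etc" },
    { "Binaries", "/usr/bin" },
    { "Temp",     "/tmp" },
};

static const char *layout_resolve(const char *logical)
{
    for (size_t i = 0; i < sizeof layout / sizeof layout[0]; i++)
        if (strcmp(layout[i].logical, logical) == 0)
            return layout[i].path;
    return "/";   /* fall back to something sane */
}

int main(void)
{
    printf("config files go in %s\n", layout_resolve("Settings"));
    return 0;
}
[/code]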
[QUOTE]
LSB would be a very good (reference) runtime ABI for programs, but those fucking OSS-software developers won't bother releasing their software compiled with LSB conventions. And that's their fault ...
[/QUOTE]
You act as though we have a big OSS-dev-only forum where we all communicate and coordinate. The open source community is not a single body, so it takes time for things like that to be adopted. Wait half a year and I'm sure there will be a lot more support. Also, something like 90% of the code that makes up GNU/Linux distributions is built with autotools, which generates the "configure" scripts that decide how the software gets compiled. If autotools put some work into building with LSB conventions by default, the code wouldn't even have to change. C/C++ projects rarely write their own makefiles from scratch.
[QUOTE]
So this leaves end-users with two options:
1) either stick with the distro-provided package management and package list, and never even try anything new,
or
2) become a self-taught GNU/Linux "guru" who can compile and install software from source archives.
[/QUOTE]
Or, try the newer distributions which incorporate the new technologies that improve Linux and make such things easier. In fact, yeah, you could just stick with the big, established distros, because they are all becoming LSB-compliant anyway. SuSE *has been* LSB-compliant for quite a while, and googling "LSB linux distribution" gets many hits about distributions aiming for LSB, consortiums declaring their dedication to LSB, and so on.
[QUOTE]
Oh yeah, and those external RPM/APT repositories WON'T COUNT. They are mostly monolithic collections, and so the dependencies and runtime conventions go with the repository. It is very common to fuck up one's package management by mixing different repositories with similar contents ...
[/QUOTE]
Yes, you are right, but I know for a fact that the "runtime conventions" are not as different as you think. My distro can install software from RPMs, Slackware PKGs, and DEBs without many problems. Of course, it's not perfect: Fedora's RPMs don't work with the RPM unpacker we have, but all of that can be improved as development continues.
[QUOTE]
Actually we have no Linux OS. We've got multiple OSes, which just use the Linux kernel and the GNU userland. Everything else is decided by the distro developers, who create their own standards and solutions. So basically, we've got no stable runtime, no stable ABI, no nothing.
[/QUOTE]
Yes, but as I asserted earlier, these "multiple Linuces" are not that different from each other, and even though a lot of Linux software is still pretty dumb about where it installs stuff, the efforts at cross-Linux compatibility *are* active and *are* producing things. Even smaller developers such as myself think about these problems and put real effort into fixing them. Put a ton of OSS devs in front of criticism like this and we soak it all in; we think hard about where the merit is and try to fix it. The beauty is, any capable free Linux developer who came across these posts would only be spurred on to find solutions.
That's my conclusion. Heartbreaking, baby.