Stop Microsoft
Operating Systems => Linux and UNIX => Topic started by: piratePenguin on 22 July 2005, 18:15
-
http://news.yahoo.com/news?tmpl=story&u=/ibd/20050720/bs_ibd_ibd/2005720tech01
yey!
-
Hey look, an article with no comments or even a quote from the article by the OP. Quality thread if ever I saw one.
-
Hey look, an article with no comments or even a quote from the article by the OP. Quality thread if ever I saw one.
yep.
Well, I could just quote the whole article and not bother with the link.
Thread bolloxed even more.
You want me to say something?
We're on our wayyy!!
Now, why don't you comment on the article, xyle_one?
-
Because I didn't fucking post it. If you create a thread and can't even take the time to make it worthwhile, why should I even bother?
This thread fucking sucks.
-
Because I didn't fucking post it. If you create a thread and can't even take the time to make it worthwhile, why should I even bother?
Because I did comment on it.
-
You posted a one-word reply to an article that you couldn't even be assed to quote in the OP. It's a worthless thread: no content, and no direction. Were you hoping to start a discussion? If not, why even bother posting on a message board?
-
It's a worthless thread: no content, and no direction. Were you hoping to start a discussion? If not, why even bother posting on a message board?
I was actually referring to the "we're on our way" comment.
In the original post, I was informing fellow Microsuck members that HP is propelling (eek!) Linux into the truly 'big' time.
Pity I can't delete this derailed thread :(
-
Try posting actual content instead of this postcount++ bullshit. And can't you just close this thread?
-
And can't you just close this thread?
No, unless I'm missing something.
-
I'm delighted to inform fellow Microsuck members that another company has finally seen the light and is diving head-first into free software.
In a move that suggests
Linux is finally ready for prime time, Hewlett-Packard is giving the free software a bigger role on some of its toughest servers.
Martin Fink, long HP's (NYSE:HPQ - News) point man on Linux, now oversees the NonStop unit.
As the name implies, these are industrial-strength computers -- HP's most expensive -- built to never break down.
Fink's mission: Make sure Linux and other open-source programs that HP looks to use are up to NonStop standards.
And that mission remains unchanged after HP on Tuesday said it would cut its work force by 10% and make several other structural changes to boost its results, part of a long-expected shake-up from new Chief Executive Mark Hurd.
Open-source software has caught on fast as techies embrace the freedom it brings. Unlike most commercial applications, open-source software lets anyone see the underlying code. People can copy it, tweak it and give away their improved versions.
But Linux and other open-source projects have mostly stuck to less demanding tasks on smaller machines.
That's changing. Linux is working its way into bigger systems, and Fink says it won't be long before the software rules the data center server segment.
Bringing open-source software to the NonStop platform is a big part of HP's plan to catch that wave. Fink recently spoke with IBD about his plans for the unit.
IBD: How did you get to oversee two distinct parts of HP's business, Linux and NonStop?
Fink: It's not all that intuitive to a lot of people.
I want to stress that there's no temporary assignment here. My job with the open-source operation continues to be permanent, and my job with the NonStop group is a permanent assignment.
IBD: How are the jobs related?
Fink: Over the last five years, we have seen Linux grow from the edge of the network to running infrastructure to starting to grow into (other) lines of business.
If you map that out over time, what you see is Linux getting to the point of really penetrating the data center.
Now, we could have lots of arguments about what the timing is and the degree of penetration. Some will argue that there's already penetration today. That's probably true.
Others will say Linux is still not ready for the data center. That's also probably true. It depends on the customer.
IBD: Is Linux ready for NonStop?
Fink: Our engineers were already working together in a number of areas to combine open-source and NonStop stuff in innovative and creative ways.
Then we saw the opportunity to get ahead of this curve.
We know that's where open source and Linux will end up. We can start to drive and be ahead of that growth in open source.
We're taking all the experience, knowledge and capabilities we have in NonStop and seeing how we can bring all of those to open source.
IBD: What's your first order of business?
Fink: I still need to run NonStop as the NonStop business. So there's a piece of that in which I put Linux and open source aside and realize NonStop is responsible for running the stock exchanges of the world and connecting most cell phones.
Over the past seven weeks or so, I've been making sure I understand that business. There's an expectation that's very high from those customers that the business continues to run unaffected.
IBD: What about your Linux and open-source duties?
Fink: Another piece of the job is connecting the dots (between NonStop and open-source software).
IBD: What's going on there?
Fink: We have a number of things we're looking at -- some of those things I'm not ready to talk about yet.
They include everything from how to potentially get Linux running on NonStop to taking the 200 open-source projects that run on NonStop and turning that into 2,000 open-source projects running on NonStop.
There's a variety of other activities we think are interesting and will bring a lot of enterprise-class credibility to Linux and open source.
IBD: Your rivals have embraced open-source software to some degree. How is HP's approach different?
Fink: Dell (NasdaqNM:DELL - News) has yet to make moves to accelerate or support Linux and open source in any form in the data center.
They're basically jamming boxes through the supply chain. If Linux lands on some of them, they're happy. So there's no real competitive angle there.
IBD: What about Sun Microsystems (NasdaqNM:SUNW - News)?
Fink: Sun has tried a couple of different things. They've flip-flopped a number of times (between) "We love Linux" and "We hate Linux." Right now, they're on the "We hate (Linux software seller) Red Hat" (NasdaqNM:RHAT - News) bandwagon.
Solaris 10 (on Intel-compatible systems, which Sun recently made open source) has not gained a lot of traction from what we've seen. They're not a big competitive force.
IBD: And what about IBM (NYSE:IBM - News)? It's been a huge Linux booster.
Fink: IBM has long touted Linux on the mainframe.
Yet we don't see a lot of installations out there being used in a constructive way.
Rather than just do Linux on a mainframe, we want to bring those mainframe-class capabilities to Linux and open source. That's the part IBM hasn't done.
IBM talks loud about open source, but I don't see a lot of credibility there.
IBM hates the GPL.
(GPL is the general public license used by Linux and many other open-source programs. It requires anyone using the software to offer offshoot products as free, open software.)
They do everything they can to avoid the GPL because they don't like the GPL model.
What they're after is to chain (customers) to (its middleware platform) WebSphere and (its database software) DB/2.
IBD: Which makes sense from a shareholder perspective. What is HP after?
Fink: The Linux market is growing 30% to 35% a year. Our goal is to capture as much of that market as we can as it grows -- all of it if we can.
Happy now xyle_one? :rolleyes:
Fuck you for fucking up this thread by being sarcastic instead of giving constructive criticism. :fu:
-
Fuck off, he could have at least put some effort into his post.
-
Fuck off, he could have at least put some effort into his post.
Well fucky fucky fuckety fuck you fucking fucktard fuck <3
Got auto-login, so replying to these kind of threads needs no effort at all.
THIS IS OPEN SOURCE SCENE DOODS YAY SPREAD THE WORD OF OUR LORD THE SAVIOUR etcetc
This is fun. More replies plz?
-
Fuck off, he could have at least put some effort into his post.
So, that's no reason for you to fuck up a perfectly good thread, retard. :thumbdwn:
-
I wonder how many servers they will sell?
-
If I remember right, HP was the first major vendor to sell Linux-loaded desktops and laptops.
Although there were few models, and they aren't really out there for the general public (only enterprise level), it was nice to see _some_ support.
I am just wondering WHY they are waiting to offer Linux on consumer laptops.
Every time I look at laptops from Dell, HP/Compaq, Gateway, etc., I always see "[insert name here] recommends MS Windows XP Professional".
I thought MS was no longer allowed to lock a vendor into only one OS. If that's right, then surely HP and other companies could benefit from selling Linux, as it would limit the support calls. Contrary to popular belief, Linux is easier to use because of the lack of usability problems (viruses, spyware, corruption, etc.), so they would probably have fewer support calls.
Granted, they probably don't pay a lot for support staff, given that support is in India now, but still...
-
I don't care. This thread sucks.
-
I don't care. This thread sucks.
Since your opinion matters here, why not un-suck it? I'd love to hear your thoughts on the matter.
-
xyle_one, can it. This is my forum; posting a link is perfectly acceptable, so long as it is relevant. Don't make me step in.
-
Ban me then, for getting pissed at a content-less thread posted with no effort that does absolutely nothing to further discussion about moving away from Microsoft. Oh how far this place has fallen. Fuck you.
-
Yes this thread helps promote Linux - an alternative to Microsoft Windows.
The reason you've pissed off a lot of people here is not because you criticized someone's post, it's the way you went about it: you just made a rude and sarcastic comment rather than giving constructive criticism. Hopefully you've already realized this, as your criticism of this post (http://www.microsuck.com/forums/showthread.php?p=99263#post99263) is much more helpful. :)
-
I have, but seriously, come on. Is it so much to ask that, as a contributing member of a community, you at least put some effort into it? I didn't feel that I should have to be the one to make his thread worthwhile. Instead I chose to be a dick about it. I feel much better about it that way, to be honest.
-
I thought MS was no longer allowed to lock a vendor into only one OS. If that's right, then surely HP and other companies could benefit from selling Linux, as it would limit the support calls. Contrary to popular belief, Linux is easier to use because of the lack of usability problems (viruses, spyware, corruption, etc.), so they would probably have fewer support calls.
Granted, they probably don't pay a lot for support staff, given that support is in India now, but still...
BAH!
You are so TOTALLY wrong!
While Linux doesn't have viruses/spyware, the rest of the cake is full of bullshit.
Ever wonder why many people try Linux and switch back to Windows? That is exactly because of the limited feeling the OS gives ya:
1) All software must be installed from a centralized repository.
One can't just download a .zip/.rar, unpack it and run it, like in Windows.
2) Lots of things are modifiable, and that's good, but when something WON'T WORK you are screwed: all Linuxes are different in OS design, and it is very difficult to even guess where the problem is ... unless the user is a self-taught GNU/Linux expert like me.
3) Plug&Play is very different ... when the user plugs in a device, it either works or it doesn't. No "install device" dialogs or whatever.
And every GNU/Linux distro has its OWN scheme for how the actual plug&play works, so there is no consistent feeling about this ...
All in all, GNU/Linux leaves most of those retarded end-users feeling kinda helpless, ESPECIALLY BECAUSE THEY ARE INDIVIDUAL CUSTOMERS.
Enterprise customers, e.g. the workers of some large corporation, have their Linux desktops configured by an admin or two. But individual customers have no such support whatsoever, and calling some fucking "Lai-nux Support" is as stupid as calling some Microfuck "pay to listen while we talk shit" support ;)
What I meant to say is that while users won't call the help line with "how can I remove viruses/spyware" or "how do I reinstall my OS" type questions, they will be SPAMMING the poor helpdesk guys with questions like: "HOW CAN I INSTALL SOFTWARE? WHAT THE FUCK IS .tar.gz??", "WHY WON'T MY USB PRINTER/SCANNER WORK??", "WHY CAN'T I OPEN MY Microsoft FORMAT DATA??", "I WANT !"... blah blah
Linux won't be getting anywhere in the desktop market, because that's the piece of the cake where all the technology-handicapped retards expect to be served with graphical abstractions of the computer system, in a way that shouldn't even act like computer systems really do.
Sad but true, Mac OS X and Windows do far better in this sector.
-
1) all software must be installed from a centralized repository.
One can't just download a .zip/.rar, unpack it and run it, like in Windows.
Sure you can. Not much software is packaged this way, but it's been done. Mostly with proprietary software.
What I meant to say, is that while users won't call the help-line for some "how can I remove viruses/spyware" or "how do i reinstall my OS" type of questions, they will be SPAMMING the poor helpdesk guys with questions like: "HOW CAN I INSTALL SOFTWARE? WHAT THE FUCK IS .tar.gz??", "WHY WON'T MY USB-PRINTER/SCANNER WORK??", "WHY CAN'T I OPEN MY Microsoft FORMAT DATA??", "I WANT !"... blah blah
Get the users, and the software comes. And then users can download and click their software just like they want. And enough with the "printer doesn't work!" and "Office documents don't open!" stuff. They work. Simple as that. And if they don't, that's Hardware Manufacturer X's fault.
Linux won't be getting anywhere in the desktop market, because that's the piece of the cake where all the technology-handicapped retards expect to be served with graphical abstractions of the computer system, in a way that shouldn't even act like computer systems really do.
Linux provides nearly all the GUI that it can, as the OS. Windows doesn't provide the software installer, the software does. On Linux, it will come from the same place.
-
Sure you can. Not much software is packaged this way, but it's been done. Mostly with proprietary software.
Sure, one CAN create self-installing packages, but creating such packages is MUCH HARDER for GNU/Linux than for other systems. The developers need to constantly add new ABI tests to their installers in order to keep the software compatible with the distros they support.
Ever heard of glibc breaking backwards compatibility? Yeah, that happens quite often (about every 2 years or so), and it makes binary packaging really hard: the only way to be 100% sure the software can run on the target distro is to compile it from source, but that won't work for proprietary software.
This means that software developers MUST synchronize their update cycle with the distributions they wish to support! They must also build multiple binaries from their source, for every fucking system ABI layout their supported distros might have.
Like ... tail wags the dog. Sad isn't it?
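For what it's worth, an installer can at least detect the glibc mismatch up front instead of letting the program die later with a cryptic symbol error. A minimal sketch in C of the kind of runtime gate a binary installer might ship; the 2.3 minimum below is an arbitrary example, not any real package's requirement:

```c
/* Sketch of a glibc version gate a binary installer might run before
 * unpacking. gnu_get_libc_version() is glibc-specific; the required
 * minimum here (2.3) is purely illustrative. */
#include <gnu/libc-version.h>
#include <stdio.h>

/* Return 1 if the running glibc is at least major.minor, else 0. */
int glibc_at_least(int major, int minor)
{
    int maj = 0, min = 0;
    sscanf(gnu_get_libc_version(), "%d.%d", &maj, &min);
    return maj > major || (maj == major && min >= minor);
}
```

An installer would call `glibc_at_least(2, 3)` (or whatever its binaries were linked against) and bail out with a readable error message if the check fails.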
Get the users, and the software comes.
Linux has enough users to attract developers to make software for Linux ... BUT, most developers become frustrated with the inconsistent ABI and lack of standards. Without a stable and backwards-compatible runtime environment, ISVs are forced to suck the cock of the people who happen to design the distros ...
And then users can download and click their software just like they want. And enough with the "printer doesn't work!" and "Office documents don't open!" stuff. They work. Simple as that. And if they don't, that's Hardware Manufacturer X's fault.
If a particular device won't work the way the user wants it to, the fault is not only the HW manufacturer's: the target distro might have fucked up some plug&pray magic at the userland level, so nothing happens when the user plugs in a device. Simple as that.
And because GNU/Linuxes have NO standards for plug&pray, who can be blamed? The HW manufacturer, even if the driver would work? The distro developers, who write their own quick-n-dirty scripts to invoke some retard-proof plug&play magic?
In GNU/Linux no one can be blamed for things that are not standardized. And since nothing is standardized, the only thing we can bitch about is the retarded system design most distros have these days.
And that ain't really no one's fault; it's just the way things work with Open Source /,,/
It's better to do it all yourself than let some other guy decide how things should work. Since money can't be used to enforce standards, we are the kings of nothing ;)
Linux provides nearly all the GUI that it can, as the OS. Windows doesn't provide the software installer, the software does. On Linux, it will come from the same place.
In Linux there are NO standard installer ABIs. That is why I am trying to make this sandboxed binary runtime platform ... but nobody seems to care, since they are occupied with wanking over their favorite distros and their Godly package managers (AAAAAH APT IS SO SEXY I'M GONNA CUM LIKE A HORSE!)
There are package managers in Linux, but they are always integrated tightly with the distribution-specific system layout (and with the runtime conventions of the system), and so they are not suitable for universally installable packages ... heck, there is NO commonly accepted universal runtime standard, and that is sad.
LSB would be a very good (reference) runtime ABI for programs, but those fucking OSS software developers won't bother releasing their software compiled with LSB conventions. And that's their fault ...
So this leaves end-users with two options:
1) stick with the distro-provided package management and package list, and never even try anything new
or
2) become a self-taught GNU/Linux "guru" who can compile and install software from source archives.
Oh yeah, and those external RPM/APT repositories WON'T COUNT. They are mostly monolithic collections, so the dependencies and runtime conventions go with the repository. It is very common to fuck up one's package management by mixing different repositories with similar contents ...
Actually, we have no "Linux OS". We've got multiple OSes that just use the Linux kernel and the GNU userland. Everything else is decided by the distro developers, who create their own standards and solutions. So basically we've got no stable runtime, no stable ABI, no nothing.
-
We need more Linux distributions using Synaptic or Linspire CNR.
And programs distributed through installers like the ones Firefox and UT 2004 use.
It doesn't matter if a package management program is integrated in some geeky distro, as long as it works, it's easy to use and updated with the best of the best in Linux applications (and possibly more).
-
We need more Linux distributions using Synaptic or Linspire CNR.
WRONG!
Synaptic is INTEGRATED into the host system layout. You can NOT make universal packages for such a system.
Most programs are prefixed to /usr at compile time. Donkey cock, I tell ya.
And programs distributed through installers like the ones Firefox and UT 2004 use.
It doesn't matter if a package management program is integrated in some geeky distro, as long as it works, it's easy to use and updated with the best of the best in Linux applications (and possibly more).
Blahblah ... nothing new.
Binary installers are okay, if the development team can keep up with the ever-evolving GNU/Linux distros they support. They need to add different binary builds, ABI checks and so forth ... and constantly upgrade the package, since GNU/Linux distributors won't do shit to maintain backwards compatibility. That's just the way it is, and you people CAN'T deny it!
Package management need not be integrated into the underlying system layout, but unfortunately it currently is.
All software in some distro X is prefixed under /usr, has distro-specific dependencies and so forth ... donkey poooop!
One CAN make packages WITHOUT dependencies (e.g. supply the needed libraries with the software, and use LD_LIBRARY_PATH or some other mechanism to find them), but that won't help when glibc breaks backwards compatibility AGAIN ;)
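The bundle-your-own-libraries approach usually means a small launcher that points LD_LIBRARY_PATH at a lib/ directory shipped next to the binary before starting the real program. A minimal sketch in C; all paths here are made up for illustration, not taken from any real package:

```c
/* Sketch of a relocatable-app launcher: the package ships its dependent
 * shared libraries in <install_dir>/lib and sets LD_LIBRARY_PATH before
 * exec'ing the real binary. Paths are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write "<install_dir>/lib" into buf and return it. */
char *bundled_lib_dir(const char *install_dir, char *buf, size_t n)
{
    snprintf(buf, n, "%s/lib", install_dir);
    return buf;
}

/* Point the dynamic linker at the bundled libs, then exec the program. */
int launch(const char *install_dir, const char *binary)
{
    char libdir[4096];
    setenv("LD_LIBRARY_PATH",
           bundled_lib_dir(install_dir, libdir, sizeof libdir), 1);
    execl(binary, binary, (char *)NULL);  /* returns only on failure */
    perror("execl");
    return -1;
}
```

As the post says, this covers ordinary shared libraries but not glibc itself: the path to the dynamic loader is baked into the binary's ELF header, so a glibc break still bites.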
One solution is to prefix all binaries to some standard fake prefix (e.g. /0), and then use an LD_PRELOAD diverting sandbox to redirect STDIO, DL, EXEC and other GNU/Linux-specific system calls to the REAL installation directory. We also need AN ENTIRE SUBSYSTEM which provides some extra glue for backwards compatibility, like different versions of glibc and libstdc++ (since the FSF guys are having a fun time breaking backwards compatibility), and some other ESSENTIAL libraries.
I already got a sandbox for this purpose, but I guess you people ain't interested, since you are jerking off to Synaptic or whatever other gods you might worship ...
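For readers wondering what such a diverting shim looks like: roughly this. It is a minimal sketch of the LD_PRELOAD technique described above, not the poster's actual sandbox; `/opt/myapp` is an assumed real install prefix, and a full shim would interpose many more calls than open(2):

```c
/* Sketch of an LD_PRELOAD path-diverting shim: binaries are built with a
 * fake prefix (/0) and the shim rewrites paths to the real install
 * directory at run time. REAL_PREFIX is an assumption for illustration.
 * Build as a shared object: gcc -shared -fPIC shim.c -o shim.so -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define FAKE_PREFIX "/0"
#define REAL_PREFIX "/opt/myapp"   /* assumed real install directory */

/* Rewrite a path under the fake prefix to the real one; returns buf if
 * rewritten, else the original path unchanged. */
const char *divert_path(const char *path, char *buf, size_t n)
{
    if (strncmp(path, FAKE_PREFIX "/", strlen(FAKE_PREFIX) + 1) == 0) {
        snprintf(buf, n, "%s%s", REAL_PREFIX, path + strlen(FAKE_PREFIX));
        return buf;
    }
    return path;
}

/* Interpose open(2): forward to the real libc open with the diverted path. */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    char buf[4096];
    mode_t mode = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    if (flags & O_CREAT) {          /* mode argument only present with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(divert_path(path, buf, sizeof buf), flags, mode);
}
```

Run a sandboxed binary as `LD_PRELOAD=./shim.so /0/bin/someapp` and every open() of a /0 path lands in the real prefix instead.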
-
Sure one CAN create self-installing packages, but the creation of such packages is MUCH HARDER for GNU/Linux than for other systems. The developers need to constantly add new ABI tests to their installers, in order to make the software compatible with the distros they support.
The VMware installer is all Perl. I can't read it, so I've attached it. Tell me about all the ABI tests and hard stuff they had to do.
This means, that the software developers MUST synchronize their update cycle with the distributions they wish to support! They must also build multiple binaries from their source, for every fucking system ABI layout their supported distros might have.
RealPlayer ships with one binary. In an RPM. And it works in Slackware, which isn't a distro the commercial developers try to support that often. Why is that?
Linux has enough users to attract developers to make software for Linux ... BUT, most developers become frustrated with the inconsistent ABI and lack of standards. Without a stable and backwards-compatible runtime environment, ISVs are forced to suck the cock of the people who happen to design the distros ...
How do you know they are complaining about the ABI, Mr. Armchair Commercial Software Developer? Do they talk about it all the time? Care to cite anyone you know of? How come Real, VMware, id, whoever makes UT2k4, etc. don't ever complain?
If a particular device won't work the way the user wants it to, the fault is not only the HW manufacturer's: the target distro might have fucked up some plug&pray magic at the userland level, so nothing happens when the user plugs in a device. Simple as that.
Plug&Pray doesn't get messed up like that, in my experience. That's just an overgeneralization by you. Hardware manufacturers don't usually make drivers. When they do, and those drivers don't work, then you can say this happens.
In Linux there are NO standard installer ABI's.
What would an "installer ABI" be? Mistaken English?
That is why I am trying to make this sandboxed binary runtime platform ... but nobody seems to care,
Are you telling anyone, besides the dozen or so regular members here?
since they are occupied on wanking on their favorite distros, and their Godly package managers (AAAAAH APT IS SO SEXY IM GONNA CUM LIKE A HORSE!)
And you wank about how retarded Linux design is. We're even.
There are package managers in Linux, but they are always integrated tightly with the distribution-specific system layout (and with the runtime conventions of the system),
The layouts don't differ *that* much. Package prefixing into /usr is really common.
and so they are not suitable for universally installable packages ... heck, there is NO commonly accepted universal runtime standard, and that is sad.
I've never experienced problems related to this.
LSB would be a very good (reference) runtime ABI for programs, but those fucking OSS-software developers won't bother releasing their software compiled with LSB conventions. And that's their fault ...
A lot of people say LSB sucks. Saying "But it's a standard!" doesn't matter.
So this leaves end-users with two options:
1) To either stick with the distro-provided package management and package list, and never even try anything new
or
2) become a self-taught GNU/Linux "guru" who can compile and install software from source archives.
Installing from source requires three commands, yet everyone always talks about how haaaaaard it is, OMG! :rolleyes: A couple of projects even made GUI source-code installers, like for Xfce 4.2, but nobody has made a "plug in a tarball and go" one, AFAIK. There was a thing called sourcer, but that was just automated at the command line, mainly for LFS users.
Oh yeah, and those external RPM/APT repositories WON'T COUNT. They are mostly monolithic collections, and so the dependencies and runtime conventions go with the repository.
They do give you more software, though. ;)
Actually we have no Linux OS. We got multiple OS's, who just use the Linux kernel and GNU-userland.
Yeah, how about that SuSE package I installed that time? Yeah, for a different OS. It works. ;)
[removed by the administrator]
-
I know WMD and others hashed out a lot of this, but this guy has some facts wrong.
Sure one CAN create self-installing packages, but the creation of such packages is MUCH HARDER for GNU/Linux than for other systems. The developers need to constantly add new ABI tests to their installers, in order to make the software compatible with the distros they support.
Actually, no. On Windows, one must download an installer maker (which usually costs money) and use it to create the installer. Sounds right? Yeah. It's the same thing on Linux; in fact the standardized installer maker for Linux (de facto, anyway) is Loki's installer tool, which I have seen more instances of in the last three days than ever before (downloading games and proprietary software, it abounds). The tool is here:
http://www.lokigames.com/development/setup.php3
Ever heard of glibc breaking backwards compatibility? Yeah, that happens quite often (about every 2 years or so), and this makes binary packaging really hard: only way to be 100% sure the software can be run in the target distro, is to compile it from source, but this won't work for proprietary software.
Yeah, there were several glibc compat breaks in the past, and frankly I agree that glibc needs to take more initiative to keep API compatibility. And yes, this *can* make binary redistribution difficult. However, you must remember that the people behind glibc, the people behind Linux, and even the people behind KDE are all *different* *people*. And up until the last few years they weren't in the limelight at all, which gave them little incentive to make users' every dream and wish come true. In fact there are still a lot of developers who remember such a time and deliberately try not to give in to a user's every whim. Given time and new leadership, things begin to change, for instance only introducing binary incompatibility with new major versions so that old major versions can be kept around.
There is the second point of Mono and what it will mean with further adoption. Binaries produced by Mono (like those from any other CIL-targeting compiler) will work on Windows, on Linux, on Mac OS X; hell, if you can get a runtime working on BeOS you could run them there. Of course this cross-platformness extends to all the different forms of Linux as well. I do believe it will be the saving grace for this problem if glibc and other projects continue introducing problems.
By the way, you forgot Linux with its refusal to introduce a stable ABI for drivers. Linus claims he will never change his mind, but frankly, who cares if he never changes his mind, because the opportunity is open for someone else to fix it with some magical solution. Stranger things have happened to Linux in the past, like the introduction of the Composite and Damage extensions. Everyone involved in X11 development (including me, slightly) expected alpha channels on Linux to appear as a single extension, with the semantics of blending windows together built into the server, but because of the vast difficulty of doing that, an even better solution was devised that made an infinite number of effects possible and implementable by any capable dev who tries.
Linux has enough users to attract developers to make software for Linux ... BUT, most developers become frustrated with the inconsistent ABI and lack of standards. Without a stable and backwards-compatible runtime environment, ISVs are forced to suck the cock of the people who happen to design the distros ...
Given your personal frustrations with working with inconsistent APIs and ABIs, I can tell you that people are working on such matters; in fact, I am (komodoware.com).
If a particular device won't work the way the user wants it to, the fault is not only the HW manufacturer's: the target distro might have fucked up some plug&pray magic at the userland level, so nothing happens when the user plugs in a device. Simple as that.
Just as Windows may have fucked something up. Just the other day I tried to get my USB CD burner working (it has worked fine in Windows forever), but Windows failed to notice me plugging it in, and in fact I had to go to the Device Manager (something few home users know about) and try to figure out why Windows wasn't using the correct driver for it. I went to the manufacturer's site and got their drivers, but still to no avail! Windows has just the same problems, if not more. Just because you have had success on Windows and failure on Linux doesn't mean that's everybody's experience. *Not* simple as that.
And because GNU/Linuxes have NO standards for plug&pray, then who can be blamed? The HW manufacturer, even if the driver would work? The distro developers, who make their own quick-n-dirty scripts to invoke some retard-proof plug&play magic?
Linux doesn't have standards for plug and play because neither does Windows. All plug and play is, is a marketing term describing a piece of software which detects the hardware and uses the proper driver. So who can be blamed when a device doesn't work? Blame your distribution! Duh! The people who put together the combination of individual tools into an OS are the people who have failed to get your USB mouse working or whatever.
In GNU/Linux no one can be blamed for things that are not standardized. And since nothing is standardized, the only thing we can bitch about is the retarded system design most distros have these days.
And that ain't really no one's fault; it's just the way things work with Open Source /,,/
Again, you are going on false facts here. You should hold your distro responsible for any problems using it. This is why distros so commonly have someone working upstream with OSS projects, so that they can fix the problems users complain about. And please do not generalize "Open Source" to what you see on your Linux distribution. "Open Source" is 100% separate from Linux. That's like jumping up and yelling at IBM (makers of OS/2 at one point, remember) because Windows doesn't work! Remember, Mac OS X is based on Darwin, which is open source.
It's better to do it all yourself than let some other guy decide how things should work. Since money can't be used to enforce standards, we are the kings of nothing ;)
You're right, but you just made all your previous points moot. If you want to do it all yourself, it won't be easy. And I don't get the "kings of nothing" thing.
In Linux there are NO standard installer ABI's. That is why I am trying to make this sandboxed binary runtime platform ... but nobody seems to care, since they are occupied on wanking on their favorite distros, and their Godly package managers (AAAAAH APT IS SO SEXY IM GONNA CUM LIKE A HORSE!)
Since when does Windows have a "standard installer ABI", unless you mean the registry, which maps installed applications to uninstallers? That in itself is merely in its beginning stages on Linux. All it takes is the right toolkit for developers to use, one which would interact with whatever package manager is available on the machine and do the right stuff. There are several solutions, in library and program form, which attempt this. By moving the process of dealing with multiple package managers into the toolkit, the application is free to do what it was designed for. And if Mono does in fact keep growing in usage, a copy of the toolkit can be shipped with the binary distribution of your software so that it really will work everywhere. Such libs are small, of course, since they don't include the other stuff that comes with packages, such as documentation, source code, and the overhead of the package manager being used. Once Linux distros get used to this happening with proprietary software, they'll create tools which remove unneeded libraries and files from the individual install directories (remember that toolkit?).
There are package managers in Linux, but they are always integrated tightly with the distribution-specific system layout (and with the runtime conventions of the system), and so they are not suitable for universally installable packages ... heck, there is NO commonly accepted universal runtime standard, and that is sad.
The package manager and filesystem layout are perhaps the only large inhibitors of such packages: the ABI of most projects *does* stay pretty stable, and this just reiterates my point about using Mono once again (in managed CIL code there is effectively no separate ABI, only APIs -- you can add members anywhere and dependent code will still use the correct struct/class layout).
Funny you should mention filesystem layout, when the toolkit I've been so fondly speaking of has just this feature: the ability to define the layout of the filesystem from a system-global layout file, in a place that doesn't require you to bend over to a certain layout in order to support it. Right now (it's still pre-release) it reads /.System/Layout.xml. On the system it was built for (my Komodo distribution) it maps stuff to /System, /System/Software, /Software, /System/Temp etc., but on a traditional Linux system a Layout.xml could specify paths like /usr/bin and /etc. Again, the app doesn't really care.
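For illustration, a hypothetical Layout.xml of the kind described might look like this. The element and attribute names are guesses (the real Komodo schema isn't shown here); only the paths come from the description above:

```xml
<!-- Hypothetical sketch of /.System/Layout.xml; schema names are invented -->
<Layout>
  <Path role="system"   value="/System"/>
  <Path role="software" value="/Software"/>
  <Path role="temp"     value="/System/Temp"/>
  <!-- On a traditional FHS system the same roles could map to
       value="/usr/bin", value="/etc", and so on; the app only
       ever asks for a role, never a hard-coded path. -->
</Layout>
```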
LSB would be a very good (reference) runtime ABI for programs, but those fucking OSS-software developers won't bother releasing their software compiled with LSB conventions. And that's their fault ...
You act as though we have a big OSS-dev-only forum where we all communicate and coordinate. The open source community is not a single body, so it takes time for things like that to be adopted. Wait half a year and I'm sure there will be a lot more support. Also, 90% of the code which comprises GNU/Linux distributions is built with autotools, which generates the "configure" scripts that choose how the software should be compiled. If autotools put some work into building with LSB conventions by default, the code wouldn't even have to change. Rarely do C/C++-based projects write their own makefiles.
So this leaves end-users with two options:
1) stick with the distro-provided package management and package list, and never even try anything new,
or
2) become a self-taught GNU/Linux "guru" who can compile and install software from source archives.
Or, try new distributions which incorporate the new technologies that better Linux and make it easier to do such things. In fact, you could just stick with the big established distros, because they are all becoming LSB compliant anyway. SuSE *has been* LSB compliant for quite a while, and googling "LSB linux distribution" gets many hits about distributions aiming for LSB, consortiums declaring LSB dedication, etc.
Oh yeah, and those external RPM/APT repositories WON'T COUNT. They are mostly monolithic collections, so the dependencies and runtime conventions go with the repository. It is very common to fuck up one's package management by mixing different repositories with similar contents ...
Yes, you are right, but I know for a fact that the "runtime conventions" are not as different as you think. My distro can install software from RPMs, Slackware PKGs, and DEBs without many problems. Of course, it's not perfect: Fedora's RPMs don't work with the RPM unpacker we have, but all that can be improved as development continues.
Actually, we have no single Linux OS. We've got multiple OSes that just use the Linux kernel and the GNU userland. Everything else is decided by the distro developers, who create their own standards and solutions. So basically, we've got no stable runtime, no stable ABI, no nothing.
Yes, but like I asserted earlier, these "multiple Linuces" are not that different from each other, and even though a lot of Linux software is still pretty stupid about where it installs stuff, it's all a matter of the efforts toward cross-Linux compatibility, which *are* active and which *are* producing things. Even smaller developers such as myself think about these problems and apply more force to fixing them. Put a ton of OSS devs in front of one of you people and we soak it all in. We think deeper about where the merit is and try to fix it. The beauty is, any capable free Linux developer who came across these posts would only be compelled to find solutions.
That's my conclusion. Heartbreaking baby.
-
http://www.lokigames.com/development/_img/shots/setup-1.jpg
What widget set is that? It's the foul-ass one that xine uses - I find it repulsive.
-
Dunno, but Loki's installer/updater uses GTK (I think that's a gay custom theme)
-
XINE uses Xt, which I do believe that installer also uses. Xt (the X11 toolkit) comes with just about every X11 distribution known to man. They used it for that reason.
And Loki's new installer, which is used for all their newer stuff (of course, they're dead now), uses GTK but also provides a text install interface. Perhaps they too decided that Xt is ugly and definitely crappy and started using GTK.
-
I dont care. This thread sucks.
Wow! What an immature idiot you are. If you don't like the thread stop fucking replying to it!!!!!
-
Oh okay sure thing boss.
Er, I mean, allow me to add extra punctuation so you know I am serious.
OH OKAY SURE THING BOSS!!?!?!!!?!111!!
-
XINE uses Xt, which I do believe that installer also uses. Xt (the X11 toolkit) comes with just about every X11 distribution known to man. They used it for that reason.
And Loki's new installer, which is used for all their newer stuff (of course, they're dead now), uses GTK but also provides a text install interface. Perhaps they too decided that Xt is ugly and definitely crappy and started using GTK.
How much of a pain in the ass would it be to have the program itself decide which widget set would be most appropriate? Like, for example, when installing a program, it would look around and see if you had Lesstif. If you do, it uses it - if not, it looks for the next item on its preferred widget list, like gtk or qt or something. And users themselves would be able to select their favorite toolkits, allowing them to define a look for their system.
I guess it would be kinda hard. I think a really generic program ought to try it. Search for a toolkit, and then generate the gui dynamically. That would be cool. Some oldskool fool could even set everything to run curses style, like Slackware tools.
-
It would be a terrible pain in the ass.
Unless you created a proxy toolkit which maps to a bunch of other toolkits
-
Well, that would be nothing more than a file that matches widgets from the toolkits up with generic names. Using such a proxy kit could let someone port an old program to another widget set: use the proxy to map Qt to generic, then use the proxy in the other direction to map generic to GTK. I think it could work, if the proxy was smart enough.
-
HP has a record of botching Windows software to make their hardware run. Can this be considered any real help to Linux?
Or will they require you to run their invasive software as they do on Windows?
-
I can just see it now... they end up having to build a perfect Windows emulation layer of their own (since, being a company, they couldn't stand to use Wine) just so they can run some stupid little corporate spyware apps to bother you and ask you if you thought about tech support today, or to make that little USB card reader work.
-
Simple solution: if you want to release a closed-source application on a system that depends on something that might break like, uhm, glibc, you use an intermediate language. Microsoft .NET has MSIL; Mono and DotGNU implement the same thing. The intermediate language is bytecode (kinda like Java, but better) that gets compiled into native code and, well, works. That is much smarter and could make packaging a lot easier.
-
I think it's great that Linux is moving up in the world. I'm going to have Unix on my server soon, just to see what it's like.
-
AH <3
At last some well-thought-out criticism to reply to.
I know WMD and others hashed out a lot of this, but this guy has some facts wrong.
I might have, since I have only done some small admin progs for my friends ... and even with those, ABI breaks with C++ suck donkey cocks.
Why do I code admin stuff in C++? Maybe I am a total noob, but C++ just makes code maintenance easier, since with method/data encapsulation it is easy to test different components separately.
Actually, no. On Windows, one must download an installer maker (which usually costs money) and use it to create the installer. Sounds right? Yeah. However, on Linux it's the same thing; in fact, the standardized installer maker for Linux (de facto, anyway) is Loki's installer tool, which I have seen more instances of in the last three days than ever before (downloading games and proprietary software, it abounds). The tool is here:
http://www.lokigames.com/development/setup.php3 (http://www.lokigames.com/development/setup.php3)
The Loki installer is actually better than any of those RPM or DEB alternatives, since it forces the software's binaries and libraries to be runtime-relocatable. And that is just a king idea.
Yeah, there were several glibc compat breaks in the past, and frankly I agree that glibc needs to take more initiative to keep backward compatibility. And yes, this *can* make binary redistribution difficult. However, you must remember that the people behind glibc, the people behind Linux, and even the people behind KDE are all *different* *people*. And up until the last few years they weren't in the limelight at all, which gave them little incentive to make users' every dream and wish come true. In fact, there are still a lot of developers who remember such a time and deliberately try not to give in to a user's every whim. Given time and new leadership, things begin to change; for instance, only introducing binary incompatibility with new major versions, so that old major versions can be kept around.
A major-version ABI compatibility scheme is good practice. Heck, even the LSB people consider this a "best practice". It should be THE ONLY PRACTICE for GNU-linker-based systems, till somebody invents some mechanism to do the versioning at the binary level (the ELF binary format has symbol versioning properties, but only the GNU "base system" developers use them).
And I think that "FREE SOFTWARE IS THE ONLY WAY" -type dickheads should not be allowed leadership in ANY OSS project. Those guys forcefully break binary compatibility, just because it makes source distribution the only viable way to ship apps that use the software component ;)
I might be a little fascist, but I do know the needs of the Enterprise (the God and the Devil of our realm). The current world needs enterprises, so GNU/Linux MUST fit in and respect the enterprise, or no mass exodus of proprietary apps will happen ...
There is the second point of Mono and what it will mean with further adoption. Binaries produced by Mono (like any other CIL-targeting compiler) will work on Windows, will work on Linux, will work on Mac OS X; hell, if you can get a runtime working on BeOS, you could run them there. Of course this cross-platformness covers all the different forms of Linux as well. I do believe it will be the saving grace for this problem if glibc and other projects continue introducing breaks.
Hmm.
I know that Mono compiles the userland-specific parts of the binary into bytecode ...
but will this slow down program execution?
I mean, can the bytecode part of a Mono binary be compiled into a platform-specific binary form on the fly, and the precompiled binary then be used instead of the Mono binary?
If so, then I personally think Mono will be the best shot our chaotic GNU/Linux scene has. Otherwise it will never be adopted by enterprises, which require runtime compatibility across all Linuces.
By the way, you forgot Linux with its refusal to introduce a stable ABI for drivers. Linus claims he will never change his mind, but frankly, who cares if he never changes his mind, because the opportunity is open for someone else to fix it with some magical solution. Stranger things have happened to Linux in the past, like the introduction of the Composite and Damage extensions. Everyone involved in X11 development (including me, slightly) expected alpha channels on Linux to appear as a single extension, with the semantics of blending windows together built into the server, but because of the vast difficulty of doing that, an even better solution was devised that made an infinite number of effects possible, implementable by any capable dev who tries.
Yeah, and this is why I think Linus is not such a good leader after all. He is more of a visionary than a practical leader.
Bill Gates thought right when Microsoft implemented their HAL component and made a stable driver ABI.
If somebody knew enough Linux kernel driver programming, he/she could make a kernel module which implements some high-level driver-ABI abstraction like the NT kernel's HAL. This abstraction would incur some overhead, since it needs to stay binary-compatible even if the kernel subsystems get redesigned ... and that means this Linux HAL would need quick-n-dirty patches to keep things working.
A Linux binary HAL would be possible, but it would need A LOT of design work so that the base ABI interfaces can stay the same for years to come. Very hard this would be, I tell ye ...
Given your personal frustrations with working with inconsistent APIs and ABIs, I tell you that people are working on such matters, in fact, I am (komodoware.com).
And I appreciate your distro. Just read the documentation on yer platform runtime, and it is just GREAT.
At last somebody understood that a consistent runtime ABI is a good thing ;)
Let's hope that the enterprise starts embracing your platform/runtime. It would make the LSB group's work easier if they could integrate your ideas into their own, and so acquire a stable runtime for the third-party proprietary ISVs <3
Just as Windows may have fucked something up. Just the other day I tried to get my USB CD burner working (it has worked fine in Windows forever), but Windows failed to notice me plugging it in, and in fact I had to go to the Device Manager (something few home users know about) and try to figure out why Windows wasn't using the correct driver for it. I went to the manufacturer's site and got their drivers, but still to no avail! Windows has just the same problems, if not more. Just because you have had success on Windows and failure on Linux doesn't mean that's everybody's experience. *Not* simple as that.
Linux doesn't have standards for plug and play because neither does Windows. "Plug and play" is just a marketing term for software that detects the hardware and loads the proper driver. So who can be blamed when a device doesn't work? Blame your distribution! Duh! The people who put together the combination of individual tools into an OS are the people who have failed to get your USB mouse working or whatever.
In the commercial world one can blame the guys who make the third-party app/driver, or the dickos who make the crappy OS ;)
Bug reports are always taken into account, since the corporations behind the software are monetarily bound to satisfy their customers.
In the OSS world nobody is responsible for anything. The GPL license even ENFORCES this policy (it disclaims all warranty). Some company CAN take responsibility to some degree, like issuing a customer a working GNU/Linux server system, but even they cannot guarantee that every individual component in the OS works.
Again, you are going on false facts here. You should hold your distro responsible for any problems using it. This is why distros so commonly have someone working upstream with OSS projects, so that they can fix the problems users complain about. And please do not generalize "Open Source" to what you see on your Linux distribution. "Open Source" is 100% separate from Linux. That's like jumping up and yelling at IBM (makers of OS/2 at one point, remember) for Windows not working!! Remember, Mac OS X is based on Darwin, which is open source.
Yeah, here you are right ...
Maybe I should just blame the Linux coders specifically.
Would this make people listen to me more? Possibly ;)
You're right, but you just made all your previous points moot. If you want to do it yourself, it won't be easy. And I don't get the "kings of nothing" thing.
Since when does Windows have a "standard installer ABI", unless you mean the registry, which maps installed applications to uninstallers? That in itself is merely in its beginning stages on Linux. All it takes is the right toolkit for developers to use, one which would interact with whatever package manager is available on the machine and do the right stuff. There are several solutions, in library and program form, which attempt this. By moving the process of dealing with multiple package managers into the toolkit, the application is free to do what it was designed for. And if Mono does in fact keep growing in usage, a copy of the toolkit can be shipped with the binary distribution of your software so that it really will work everywhere. Such libs are small, of course, since they don't include the other stuff that comes with packages, such as documentation, source code, and the overhead of the package manager being used. Once Linux distros get used to this happening with proprietary software, they'll create tools which remove unneeded libraries and files from the individual install directories (remember that toolkit?).
In Windows they've got this InstallShield API, which is quite good ... it contains all the stuff like authorization, DLL installing, software registration and the like ...
Most proprietary software uses InstallShield. Open-source Win32 software may not use it, but that is their problem. InstallShield makes things easier, since it is designed to be the Ultimate Installer for Win32. It is even partly integrated into the userland, if I recall right ...
I like the idea of a many-to-one package-manager toolkit library. It would require a LOT of work, since library dependencies are distro-specifically named ... and only in RPM are libraries automatically mapped to provide soname dependencies.
The package manager and filesystem layout are perhaps the only large inhibitors of such packages: the ABI of most projects *does* stay pretty stable, and this just reiterates my point about using Mono once again (in managed CIL code there is effectively no separate ABI, only APIs -- you can add members anywhere and dependent code will still use the correct struct/class layout).
Funny you should mention filesystem layout, when the toolkit I've been so fondly speaking of has just this feature: the ability to define the layout of the filesystem from a system-global layout file, in a place that doesn't require you to bend over to a certain layout in order to support it. Right now (it's still pre-release) it reads /.System/Layout.xml. On the system it was built for (my Komodo distribution) it maps stuff to /System, /System/Software, /Software, /System/Temp etc., but on a traditional Linux system a Layout.xml could specify paths like /usr/bin and /etc. Again, the app doesn't really care.
Okay, enough of the ABI then.
This does NOT take away the problem with statically --prefix'ed binaries.
Prefixing just sucks! Binaries/libraries should just get their runtime location and use the FHS 2.3 standard to get their data from relative locations (../share/). This runtime relocation would allow users and userland admin apps to install software into isolated places, so that packages can be easily managed without some huge database.
I just mean ... currently the package-manager database is THE only thing that keeps the userland working. Without it, it would be hard to do package upgrading. With runtime-relocatable binaries/libraries it is possible to place them in isolated locations and trust that they've got all the /lib, /bin or whatever components they specified during installation. In this model we would use the package-manager database ONLY AS A CACHE, to make file/soname search quick across multiple packages. Neat, huh?
BTW, the Win32 API has this GetModuleFileName(), which just rocks! It is actually the only thing I like in the Win32 API. It makes binary distribution very easy, since one can make just one .zip with a decent directory layout and get all the data via paths relative to the module's directory at runtime ;)
You act as though we have a big OSS-dev-only forum where we all communicate and coordinate. The open source community is not a single body, so it takes time for things like that to be adopted. Wait half a year and I'm sure there will be a lot more support. Also, 90% of the code which comprises GNU/Linux distributions is built with autotools, which generates the "configure" scripts that choose how the software should be compiled. If autotools put some work into building with LSB conventions by default, the code wouldn't even have to change. Rarely do C/C++-based projects write their own makefiles.
I was talking about the GNU/Linux OSS scene ... they have multiple forums, yes, but on some of them people are straight-up flaming the guys at LSB. Some people diss LSB as an "enterprise whore standard" or such (a little exaggerated example ;). Many GNU/Linux coders I know complain about "the extra steps needed to make an LSB binary", and most complain about this "RPM IS THEIR PACKAGE FORMAT AND IT SUCKS" thing ... though LSB apps need not be RPM-packaged if they conform with the LSB standards otherwise.
GNU/Linux distribution developers need only read the LSB spec and hack their userland a bit, and that would do it. Still, many refrain from doing so.
I just hope LSB becomes something big, along with this Komodo system which just plain rules (proof of concept), so that these egoistic hippie-commie coders would have to eat their words, suffocate and die of shame ;)
Or, try new distributions which incorporate the new technologies that better Linux and make it easier to do such things. In fact, you could just stick with the big established distros, because they are all becoming LSB compliant anyway. SuSE *has been* LSB compliant for quite a while, and googling "LSB linux distribution" gets many hits about distributions aiming for LSB, consortiums declaring LSB dedication, etc.
And this I do. I am using Debian, which has decent LSB runtime support.
Yes, you are right, but I know for a fact that the "runtime conventions" are not as different as you think. My distro can install software from RPMs, Slackware PKGs, and DEBs without many problems. Of course, it's not perfect: Fedora's RPMs don't work with the RPM unpacker we have, but all that can be improved as development continues.
Well, those distros were in the same glibc/stdc++/gcc versioning interval, so of course such binaries will work.
However, if some of those go out of sync between two distros, then there MAY be problems.
I am expecting a lot of bin-compat problems when some distros go and move to GCC 4.0 (GCC 4 broke the stdc++ ABI; hooray for the GCC/glibc hippies!) =)
Yes, but like I asserted earlier, these "multiple Linuces" are not that different from each other, and even though a lot of Linux software is still pretty stupid about where it installs stuff, it's all a matter of the efforts toward cross-Linux compatibility, which *are* active and which *are* producing things. Even smaller developers such as myself think about these problems and apply more force to fixing them. Put a ton of OSS devs in front of one of you people and we soak it all in. We think deeper about where the merit is and try to fix it. The beauty is, any capable free Linux developer who came across these posts would only be compelled to find solutions.
That's my conclusion. Heartbreaking baby.
I am grateful for the cross-Linux solutions you and your Komodo comrades are making. They really seem intuitive and genuine, not like most of the others (APT, RPM) out there ...
I just added your project to my list of OSS projects that bring something revolutionary to GNU/Linux. The other project is LSB, but heck, they are already being pissed on by some smaller OSS devs ;D
Are you gonna make a distro-neutral Komodo runtime package?
Anyway, I can't apologize enough for my coarse language. I am just fed up with the people who make this scene roll ... many of them (especially the "leader" characters like Linus and Richard Stallman) have ideals that seem too impractical for the enterprise.
That is why we NEED middleware app components and standards like Mono/LSB/whatever to make The Holy Enterprise believe in GNU/Linux ... but these technologies are being pissed on by the zealots who think open-source distribution is the only way. Sigh.
Umm ... I got nothing more to say this time.