Stop Microsoft
Operating Systems => Linux and UNIX => Topic started by: Kintaro on 8 July 2005, 23:24
-
Well, at first this seemed like a reasonable distro, but it has its problems. Firstly, it is shit. Just plain shit. How anybody can stand this crap I have no idea. It ships with an outdated version of gaim which crashes constantly, so I have to compile my own (the gaim in the apt repository is outdated too). More than that, it ships with no development utilities, making it a pain in the arse to build my own kernel. My panel has also decided to crash and die on me, because something (one of the applets) keeps hanging. It has another dozen inconsistent problems which make it a bucket of flaming crap.
I think I will go back to Fedora Core 4 which treated me much better.
-
It's not for power users. For someone you want to turn onto Linux though, it's perfect.
Why are you bothering with Fedora? Why not Debian?
Sorry, just can't stomach a RPM-based distro -- ever again.
-
If I have been keeping track of users correctly, weren't you X11 a while back? If so, didn't you use to cuss out Fedora and hype Slackware in your sig? What the hell happened?
-
It's not for power users. For someone you want to turn onto Linux though, it's perfect.
Why are you bothering with Fedora? Why not Debian?
Sorry, just can't stomach a RPM-based distro -- ever again.
So what exactly is wrong with RPM? IMO the packaging format is nearly perfect: the RPM build process automatically lists the sonames provided by library packages, and software packages automatically list the sonames they require.
With this it is very easy and quick to build entire distribution profiles, just making sure that each soname is provided by only one package at a time ;)
This also (in theory) lets proprietary binary-only software packages depend upon some soname in the host distro, and then the resulting package can be installed on any distro that provides the same soname in some other package. Neat, huh?
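You can see this on any RPM system; for instance (the package names and output here are purely illustrative):
% rpm -q --provides libpng     # a library package advertises the sonames it ships
libpng.so.3
libpng = 1.2.8
% rpm -q --requires gimp | grep png    # an application lists the sonames it needs
libpng.so.3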
And as we know, sonames are often generated at compile time using GNU Libtool and its versioning system, so library names are VERY dependable.
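To illustrate the libtool side (the numbers are invented): you declare an interface version at link time, and the soname falls out of it.
% libtool --mode=link gcc -o libfoo.la foo.lo -rpath /usr/lib -version-info 2:0:1
# current=2, revision=0, age=1: on Linux this produces libfoo.so.1.1.0
# with SONAME libfoo.so.1 (major = current - age), so the soname only
# moves when compatibility actually breaks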
In Debian you have to know the package names in order to define dependencies. Packages do not list the sonames they provide, nor require sonames directly. This sucks, since package names are mostly distro-specific crap -> makes distro-wide packages impossible.
The only thing that has made RPM distros hard to maintain in past years has been the lack of higher-level package management. But today, with innovations like urpmi, yast2, yum and such ... this is not really a problem.
The one thing the RPM standard does lack is reverse-dependency handling. E.g. say I have a program linked against libfoo.so.0.2.0, using all the interfaces it provides; it will be linked against libfoo.so.0 (0 being the interface version the linker understands). Now let's pretend we install this program on another system which has a slightly older version of the library, libfoo.so.0.1.0, installed. Our program installs just fine, since libfoo.so.0.1.0 also provides libfoo.so.0 ... but when you run it, it won't work, since some functions implemented in libfoo.so.0.2.0 are not present in libfoo.so.0.1.0! Got the idea?
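You can watch this happen with a toy library (libfoo and prog are made up, output trimmed):
% gcc -shared -Wl,-soname,libfoo.so.0 -o libfoo.so.0.2.0 foo.c
% ln -s libfoo.so.0.2.0 libfoo.so
% gcc -o prog prog.c -L. -lfoo
% readelf -d prog | grep NEEDED      # prog only records the soname, not 0.2.0
 0x00000001 (NEEDED)   Shared library: [libfoo.so.0]
# on a box that only has libfoo.so.0.1.0 (same soname, fewer functions),
# prog installs and starts fine, then dies with an 'undefined symbol' error
# the moment it calls something that only exists in 0.2.0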
The solution would be to create an add-on package management platform that tracks not only direct soname deps but also reverse soname deps. In the case above it would notice that the 'age' of libfoo on the target system is too small, and would suggest installing a newer version of libfoo.
Anyways, I've started working on some kind of "multi-distribution runtime platform", i.e. a separate subsystem with its own libraries and platforms (KDE, Gnome and the like). My idea is to give the user the option of installing software in non-static locations, like /usr/local/Apps, /home/luser/Apps etc ...
I hate the way modern distros integrate every fucking piece of software into their /usr hierarchy instead of using package-specific installation directories. The LSB standard suggests that non-base-system components should go under /opt/, though most distros ignore this without a good reason.
-
Check this (http://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt) out. If someone took that concept ("package users"), made it more user-friendly, built "package user" packages (possibly listing dependencies etc.), and got it working in some distros... it would totally own (nothing could beat it on security; just what we need if/when viruses start appearing on GNU/Linux).
I wanted to give it a go myself but I wouldn't know where to start (well... I might...).
Package users!... Absolute genius IMO.
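For anyone who doesn't want to wade through the whole hint, the core trick boils down to roughly this (names invented; the real hint adds group-write and sticky-bit tricks on the install directories):
% groupadd install
% useradd -m -g install gaim-pkg         # one dedicated user per package
% su gaim-pkg -c './configure --prefix=/usr && make && make install'
% find /usr -user gaim-pkg               # every file the package owns, instantly
# gaim-pkg can't overwrite files owned by other package users, so no package
# (or trojaned makefile) can silently clobber another one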
-
Check this (http://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt) out. If someone took that concept ("package users"), made it more user-friendly, built "package user" packages (possibly listing dependencies etc.), and got it working in some distros... it would totally own (nothing could beat it on security; just what we need if/when viruses start appearing on GNU/Linux).
I wanted to give it a go myself but I wouldn't know where to start (well... I might...).
Package users!... Absolute genius IMO.
No.
That system has no genuinely groundbreaking ideas ... no good dependency handling, no automation. Bah.
BTW, it still reverts to some perverted old unix crap, like 'having to run ldconfig as root'. I've never had a problem creating package- or user-specific ld caches and so forth ...
If you wanna do something really cool, then try to develop a package build system that:
1) works perfectly with the lower-level systems like GNU autotools, Linux kbuild, Java Ant, SCons etc ...
2) has a tightly specified standard which does not allow multiple versions of the same package to be defined. All relevant options, like -march optimisations, should be included in the package ... maybe by compiling the binaries multiple times
3) adds USEFUL metainformation to packages, and lets one get rid of direct dependencies entirely. The most-needed libraries/other crap would be integrated into the package ... etc.
And so forth, etcetera ...
-
I really don't care about the technical superiority or inferiority of RPMs; all I know is that every distro based on it sent me into the so-called RPM dependency hell. My past experiences with it void any merits it may have.
apt-get, on the other hand, has worked flawlessly. Nearly every Linux app that's not niche is included in Debian's package library. On top of that, I'm running Debian on PPC, a minority platform that usually gets the shaft when it comes to binary packages; the coverage has been equal between PPC and x86 in Debian.
-
No.
That system has no good groundbreaking ideas ...
no good dependency handling, no automation. Bah.
That's why I was thinking that a system that uses package users and has that dependency handling etc. would be pretty damn cool.
And the package users idea is ground-shattering!
The hint I referred to was (largely) specific to Linux From Scratch (LFS). It was NOT designed to be automated or to have dependency tracking or whatever. A package manager based on those concepts, with automation, dependency tracking, et cetera, would be pretty damn sweet.
-
One of the members here had a pretty groundbreaking idea not too long ago in the form of Linux binary packages. Don't remember if it was Jeff or Stryker though. :(
-
One of the members here had a pretty groundbreaking idea not too long ago in the form of Linux binary packages. Don't remember if it was Jeff or Stryker though. :(
How about we make the packages ordinary ISO images?
The packages would contain some XML files which would define the sonames and other required interfaces (services, binaries etc.) ... and these files would ALSO define the URLs where one could download the needed packages. Like BitTorrent or something.
And if an ISV wants to make a retard-proof click'n'pray application, he would just include ALL the libraries/binaries that are not defined in the LSB.
Then on the target distro a user could simply click the ISO, the mdm-platform daemon would loop-mount it, generate a sandbox (with sonames, ld cache, binary assignments etc.) and Run The Damn Thing! =)
I got this idea browsing some developer comments from the Darwind project. Damn, those Apple guys did a fine job with that framework system.
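The runtime side could be as dumb as this (paths and names invented):
% mount -o loop MyApp.iso /mnt/pkg/MyApp       # what the daemon would do on click
% LD_LIBRARY_PATH=/mnt/pkg/MyApp/lib /mnt/pkg/MyApp/bin/myapp   # run against the bundled libs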
-
I like Apple's package system. One program, one file, one folder for all. This does raise some issues with dynamic vs. static linking, but that hasn't caused much trouble yet.
-
One of the members here had a pretty groundbreaking idea not too long ago in the form of Linux binary packages. Don't remember if it was Jeff or Stryker though. :(
I believe it was Jeffberg and theJimmyJames who came up with the idea. They kicked ideas around with some guy whose name I have forgotten, and were going to build their own distro, Komodo Linux, with a GUI called GenSTeP.
-
I really don't care about the technical superiority or inferiority of RPMs; all I know is that every distro based on it sent me into the so-called RPM dependency hell. My past experiences with it void any merits it may have.
apt-get, on the other hand, has worked flawlessly. Nearly every Linux app that's not niche is included in Debian's package library. On top of that, I'm running Debian on PPC, a minority platform that usually gets the shaft when it comes to binary packages; the coverage has been equal between PPC and x86 in Debian.
Where you been, slick? apt works on RPM distros too. Granted, there aren't as many packages as there are for Debian, but it works just the same. There's even a graphical front-end for it called Synaptic that lets you click to install or remove packages. I use it on Fedora.
Unfortunately, I can't tell what the integration is like with non-apted RPMs. Like, if you download an RPM and install it yourself, I don't think apt will know about it. That could be improved on. It certainly would be nice to have one central database that tracks every program on the machine, whether you installed it from source, a package, or a package manager.
-
One of the members here had a pretty groundbreaking idea not too long ago in the form of Linux binary packages. Don't remember if it was Jeff or Stryker though. :(
Any of yas remember what this groundbreaking idea was more specifically?
-
Any of yas remember what this groundbreaking idea was more specifically?
The idea was to have packages work like OS X programs. For a good example, take gimp.app - it uses a wrapper script to point the program at all the dependencies inside the package. So instead of linking against system libraries, it links against its own resources.
Not all that well thought out - certain libraries, like gtk, atk, pango, freetype, and glibc, need to be used over and over again, which is why we link and share them in the first place. No offense to those guys, because they were cool, and at least tried to turn their ideas into reality, but they were thinkers, not doers.
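The wrapper trick itself is trivial; something like this (a paraphrase of the idea, not gimp.app's actual script):
#!/bin/sh
# rough sketch of a gimp.app-style wrapper
HERE=$(dirname "$0")
export LD_LIBRARY_PATH="$HERE/lib:$LD_LIBRARY_PATH"   # prefer the bundled libraries
export GTK2_RC_FILES="$HERE/etc/gtkrc"                # point gtk at the bundled config
exec "$HERE/bin/gimp-bin" "$@"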
I think Synaptic is just about the best idea I've seen yet. If you want to install a package, it tells you what dependencies need to be met, and one click adds all those extra packages. Unfortunately, the RPM repositories don't have the latest and greatest apps and versions. The system tends to break down with a package like ffmpeg, which is updated almost daily but released only every six months. The only way to get a working copy is to go CVS - any package built off a release is completely unsupported. And ffmpeg is damn difficult to install from source. So you have to go outside the system occasionally.
-
I think there should be ONE repository storing packages (not debs or rpms: something designed specifically for this purpose, with a name like "universal package") that contain the original source code for the package plus some patches, stored INSIDE the "universal package" (say, in a /patches/ directory), at least for the more important ones: patches required to make the code compile cleanly under whatever circumstances, to add some important functionality, or to fix bugs. The distributors could then WORK TOGETHER to keep this repository UP TO DATE, releasing their own experimental patches which, once tested (by that distribution's users and by users of other distributions; part of the repository could house the experimental patches) and deemed secure and stable enough, get added as patches to the "universal package".
ANYBODY could download a "universal package" straight from the repository, compile it and install it easily (using frontends, perhaps something like Synaptic, which could ask the user which patches to apply) on ANY distribution.
The distributors could even compile the "universal packages" for their users, package them in RPM or DEB format and put them into their own repositories. They would still gain from faster bug and security fixes. The only thing missing would be the user's control over which patches are in use (which might cause issues for users of certain (noob) distributions). But it would have its benefits.
Other operating systems (not just distributions) could also take advantage of this large repository of software, particularly GNU/Hurd, the BSDs, and probably others. Just like the GNU/Linux distributions, they could provide their own patches (experimental or otherwise) to make a package work on their OS.
The authors of the software, of course, could take patches from the repository and apply them for the next release.
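Picture one of these "universal packages" laid out something like this (all names invented):
tar-1.15.1/
    source/tar-1.15.1.tar.gz          # pristine upstream tarball, never modified
    patches/010_security_fix.patch    # numbered by importance
    patches/020_compile_fix.patch
    patches/experimental/             # distro-submitted, not yet blessed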
-
I think there should be ONE repository storing packages (not debs or rpms: something designed specifically for this purpose, with a name like "universal package") that contain the original source code for the package plus some patches, stored INSIDE the "universal package" (say, in a /patches/ directory), at least for the more important ones: patches required to make the code compile cleanly under whatever circumstances, to add some important functionality, or to fix bugs. The distributors could then WORK TOGETHER to keep this repository UP TO DATE, releasing their own experimental patches which, once tested (by that distribution's users and by users of other distributions; part of the repository could house the experimental patches) and deemed secure and stable enough, get added as patches to the "universal package".
ANYBODY could download a "universal package" straight from the repository, compile it and install it easily (using frontends, perhaps something like Synaptic, which could ask the user which patches to apply) on ANY distribution.
The distributors could even compile the "universal packages" for their users, package them in RPM or DEB format and put them into their own repositories. They would still gain from faster bug and security fixes. The only thing missing would be the user's control over which patches are in use (which might cause issues for users of certain (noob) distributions). But it would have its benefits.
Other operating systems (not just distributions) could also take advantage of this large repository of software, particularly GNU/Hurd, the BSDs, and probably others. Just like the GNU/Linux distributions, they could provide their own patches (experimental or otherwise) to make a package work on their OS.
The authors of the software, of course, could take patches from the repository and apply them for the next release.
HAHA! OLD!
There is nothing new in this idea ... though it is quite good.
Heard of Gentoo GNU/Linux, anyone?
I used it for about a year, and it had all of the ideas above.
It was a decent distro, but it started to suck later on ... installing software bloated my HD with development headers, and there was NO mechanism for reverse dependencies, i.e. I could not easily remove an already installed package.
I'm back on Debian.
-
Heard of Gentoo GNU/Linux, anyone?
Yes.
I used it for about a year, and it had all of the ideas above.
Yes, but Gentoo has its repository, Debian has theirs, Mandriva has theirs... It's all fucked up. If you read the first sentence properly:
I think there should be ONE repository
I've never used Gentoo, though I might try it after this. I've started looking at FreeBSD, and its package management is pretty similar (and very fecking good).
If this "ONE" repository existed and the distributors took it seriously, I see no reason the FreeBSD guys couldn't contribute to it too... They get mostly the same bugs and security advisories as us (take a look (http://www.freebsd.org/security/)).
-
Can't help but notice that this is a form of computer fascism. Consolidating and centralizing power/packages leads to dependence and inefficiency. Overall, Linux is not developed this way, and the community will resist your attempts to steer it toward some sort of homogenization. The "do what you like" marketing theory has been put to the test, and actually seems to produce quality products. If you start making everybody do the same thing, that's Microsoftism.
The beautiful thing about standards is that there are so many of them!
-
If this "ONE" repository existed and the distributors took it seriously, I see no reason that the FreeBSD guys couldn't contribute to it too... They get mostly the same bugs and security advisories as us (take a look (http://www.freebsd.org/security/)).
That is not possible.
You see, every Linux-distro has it's own base-system.
Each piece of software is uniquely tailored for
this base-system, statically --prefixed under /usr or /opt.
Each systems has it's own scheme on dealing with
soname dependencies, command-namespace dependencies and
package upgrading.
What this come's down to, is that a centralized repository
would need the distros' using it to be of the same
base-system-schema. And if that would be so, then they
would be, actually, ONE AND THE SAME SYSTEM ;D
AND we would have to brainwash EVERY fucking OSS-hacker
to believe into our "one-and-the-only" base-system
in order to make em port their software to our
Nazi-Linux.
The idea is good, but it just would not work.
Like I said earlier, in order to make OSS scene co-operate,
you would have to be GOD, and throw all nay-sayers
to burning hells. Got it?
-
Oh for fuck's sake. I'm after typing up a fecking huge reply in Firefox and just lost it by middle-clicking outside the textbox. :mad:
Can't help but notice that this is a form of computer fascism.
You are wrong. I wouldn't force this system on anyone.
Consolidating and centralizing power/packages leads to dependence and inefficiency.
I get the dependence part. You could say that about anyone or anything that works together with someone or something else.
Previously I depended on Microsoft, then the Mandriva developers, then X, then Y. Which would you trust more, though: them, or my system? That's the important thing.
I don't get the inefficiency part. Please elaborate. The main pro of my system, as far as I can see, is that it makes our currently very inefficient system as efficient as possible. Currently, when there is a bug in some package, say zlib, then FreeBSD, Debian, Gentoo etc. ALL work on a DIFFERENT patch and apply it to their own repositories. Inefficient.
Overall, Linux is not developed this way
If it were, it would be an efficient system, I could not dream of improving on it, and this discussion would not be taking place.
the community will resist your attempts to steer it toward some sort of homogenization.
If you think that I intend "to steer it [the community] toward some sort of homogenization", then you are mistaken.
The "do what you like" marketing theory has been put to the test, and actually seems to produce quality products.
So you believe it's that "do what you like" "marketing theory" that is the reason we have such high-quality free software? I believe otherwise.
If you start making everybody do the same thing, that's Microsoftism.
If I forced them to, then maybe. I'm not gonna force anyone to do anything, so don't compare me to those fuckers, please.
The beautiful thing about standards is that there are so many of them!
And what have I been thinking about doing this whole while? Deleting standards? Is that what you think I intend to do?
Creating standards, maybe.
That is not possible.
In which case it will be abandoned as soon as all hope is lost.
You see, every Linux distro has its own base system. Each piece of software is uniquely tailored to that base system, statically --prefix'd under /usr or /opt. Each system has its own scheme for dealing with soname dependencies, command-namespace dependencies and package upgrades.
Whether these facts are a good or a bad thing for the different distributions is arguable. Anyhow, like I've said before:
The distributors could even compile the "universal packages" for their users, package them in RPM or DEB format and put them into their own repositories. They would still gain from faster bug and security fixes. The only thing missing would be the user's control over which patches are in use (which might cause issues for users of certain (noob) distributions). But it would have its benefits.
Maybe that way ^^ should be the standard, but that's not exactly up to me (whoever adopts it will define which method is 'standard').
What this comes down to is that a centralized repository would need the distros using it to share the same base-system schema. And if that were so, they would actually be ONE AND THE SAME SYSTEM ;D
Did you miss the whole patches bit? And the whole distributors-may-compile-their-own-packages bit?
You don't appear to be understanding much of anything, TBH. How did you cope with the can-be-shared-between-different-OSes bit? "SAME SYSTEM", yeah fucking right.
AND we would have to brainwash EVERY fucking OSS hacker into believing in our "one-and-only" base system in order to make them port their software to our Nazi-Linux.
No brainwashing. What I had in mind is educating them and then letting them decide for themselves. But whatever.
The idea is good, but it just would not work.
That's what you think.
Like I said earlier, in order to make the OSS scene co-operate, you would have to be GOD, and throw all naysayers into burning hells. Got it?
I'd be glad to prove you wrong. But wait, that's already done: they ARE co-operating, just not well enough.
Anyhow. This system I have in mind: I see nothing but benefits. Better freedom (the user chooses which patches are applied; distributors may use the universal repository to compile their own binary packages for their distro's users, so a user need not even know the universal repository exists). Better convenience (all the source code and patches in one repository; packages can be compiled easily and cleanly, with the right patches). Better cooperation and, inherently, efficiency.
-
Perhaps something like this could work. Here's what you would need to do, I think. Have your distribution system work like PHP. The source code to all these programs gets dropped into a database. When my computer running FC4 stops by to pick up the latest release of transcode, the package manager looks at my system and determines what flags are required to create a package custom-suited to my needs. The package manager then gives these requirements to the distro system, which produces a package custom-fit for me. It would also store a compressed copy of the package in the database, just in case someone else with similar requirements comes along.
In this system, packages are built on the fly per distro. So anyone using Fedora, Gentoo, YDL, SuSE, Debian, or some other Linux could get a package from it.
Of course, this is rougher than it sounds. Basically, the package manager client hands the distro system configure and compiler flags, and the system then builds an rpm (for example) to those criteria.
Anything else might seem like forcing a standard. I think that being able to choose between apt, yum, rpm, yast, up2date, slapt, and others is part of what makes computers so cool - it takes all kinds. Providing an efficient and simple way to get their packages, well that's fine.
(much of my last post was political hooey, although I do think Slackware is excellent proof that dollar capitalism and market pressure are not necessary to make a quality free product. Patrick Volkerding does it because he loves it, and everyone benefits from his love. If only cars and keyboards were made that way!)
-
Perhaps something like this could work.
There is hope!
Here's what you would need to do, I think. Have your distribution system work like PHP. The source code to all these programs gets dropped into a database. When my computer running FC4 stops by to pick up the latest release of transcode, the package manager looks at my system and determines what flags are required to create a package custom-suited to my needs. The package manager then gives these requirements to the distro system, which produces a package custom-fit for me. It would also store a compressed copy of the package in the database, just in case someone else with similar requirements comes along.
Sounds good.
In this system, packages are built on the fly per distro.
Hmm hmm... I dunno about that, TBH. Although, they could provide patches for each and every package to make it compile exactly right for their distribution, which I probably would have needed anyhow. In which case such a system would (read: should) be piss-easy to implement. It wouldn't even need to use the precious resources of the core repository server(s).
Anything else might seem like forcing a standard. I think that being able to choose between apt, yum, rpm, yast, up2date, slapt, and others is part of what makes computers so cool - it takes all kinds. Providing an efficient and simple way to get their packages, well that's fine.
Well, the raw core repository will still be open for reading by people like moi. And there'd always need to be some easy way to get packages from the core repository, even for the distributors. Making the packages simple to compile (as in, straightforward like './configure && make && make install') is one goal. The distribution-specific patch for every single package is a requirement for that. Although... maybe it could be worked around... Like, the patch used by Fedora for gzip would be pretty similar to the patch used by Fedora for bzip2 and tar and binutils and coreutils, but that'll need to be looked into. What all differs between distros? (--prefix, and little more that I know of (probably --mandir and friends). Then there's library stuff that I know nothing about.)
If automation worked (as in './configure && make && make install' worked flawlessly every time on every major distro) this system could be classic. Then that web-based thing would be possible, as well as 'upkg tar' on every single distro: get the tar source code, apply whatever patches you select (or have an -auto option), compile and install.
Anyhow, I spent the last 3 hours typing this out (I took my time; I was browsing and stuff while doing it). It's not necessarily complete, and I've got more to add I think, but it goes into quite a lot of detail:
The universal package repository contains all the source code, untouched. The version of the source code in the repository is exactly as retrieved from the package author (usually from the package's website). Once the source code is in the repository, it is never modified. Instead, patches are stored in a separate directory and applied before the source code is compiled. This provides added flexibility and freedom, because whoever is compiling the package (usually a distributor or user) gets to choose which patches are applied to what they install.
Patches are given a number indicating their importance, as evaluated either by the package maintainer or by a privileged group of individuals (who have earned their privileges; mainly security experts and the like). Distributors, who are generally expected to provide frontends to the official command-line tools used to access the core repository, may override a patch's importance value. They can also send the package maintainers recommendations about it.
If maintainers are irresponsible, someone may contact the core maintainers, who have the power to remove package maintainers from their duties and appoint replacements. When users compile or download a package from the core repository, whether through the official command-line tools or through the frontend that usually comes with their distribution, they are shown the list of available patches, with descriptions and importance values (taken either directly from the core repository, or from the distribution's overrides when using the distribution frontend or any frontend with up-to-date distribution-specific settings, likely retrieved from the distribution's website).
There is one rather special patch, obey_uni-pkg-standard.patch. This patch usually only patches the configure script provided by most packages (TODO: learn about, and probably mention, Makefile.in and friends here, assuming they are relevant, which I _think_ they are). It makes the package obey the uni-pkg standard for installing packages. The uni-pkg standard has yet to be defined, but by the time this system gets implemented, assuming it does, we expect the standard to be clearly defined. The patch is only provided for packages that do not already obey the standard, and probably not even then. Distributors who offer source packages to their users are expected to provide a similar patch for their own distribution's setup, in a distribution-specific folder of the repository. That folder should hold absolutely nothing else.
Whenever a bug is found, a patch is made by the distributors or others (who all work together) and sent to the package maintainers for inclusion in the repository. When the package maintainers add it, they give it a very high importance value, especially if it fixes a security bug. When users update that package they get this patch (possibly among others); it is applied to the source code and the package rebuilt and reinstalled. Distributors could automate this process in their frontends. Anyone installing the package later on will see that it is an important patch and will (usually) include it when choosing which patches to apply. There may be a sub-directory of each package's patches directory for storing experimental patches, purely for testing purposes.
The original package authors are more than welcome, and indeed encouraged, to take patches from the repository and include some of them in the next release. Whenever a new version of a package is released, its source code is added to the repository as an entirely new package with a fresh, empty patches directory. Any still-relevant patches from the previous version may be copied across, modified if necessary. From then on, whenever a user updates the package they will be told about the newer version and will most likely (and will be recommended to) download it instead, apply whichever patches are available and appeal to them, compile and install; the older version is uninstalled as well. Distributors may disallow updating certain packages for whatever reason, but only if the user uses their frontend.
So much for source packages. Source packages have their advantages and disadvantages, as do binary packages, discussed now. Binary packages are not officially supported. However, the repository stores all the source code and the patches, so distributors may compile the source packages, package them in their own package format and distribute them to their users through their own repositories. Tools will likely be built to automate this, though they will not be officially supported.
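To make that concrete, a session with the (entirely hypothetical) command-line client might go:
% upkg update                       # sync the local index with the core repository
% upkg install tar
tar-1.15.1 - available patches:
  [95] 010_security_fix.patch   (recommended)
  [40] 020_verbose_output.patch
apply which? [95]
# the source is fetched, the chosen patches applied, then configured, compiled
# and installed as prescribed by the uni-pkg standard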
-
Wow, this is soooo off topic.
I think the system could be even easier than that. I've never had to apply a patch before, and I think your patching system might be avoidable. I use apt for my packages, and instead of releasing patches, they put out minor or micro version releases. Like, if foo-1.4.5 gets a really small tweak, it comes out as foo-1.4.5-a or something. The apt system just kills the old one and installs the new one. So instead of having a complicated patch system, perhaps a micro-versioning system would be more efficient.
An example of how things could work:
Let's say I want to install transcode-1.0.0. Here's the configure sequence I used when installing transcode-1.0.0b:
% export CFLAGS="-O2 -fomit-frame-pointer -mmmx -msse -mfpmath=sse"
% ./configure --enable-mmx --enable-sse --enable-sse2 --enable-freetype2 --enable-lame --enable-ogg --enable-vorbis --enable-theora --enable-libquicktime --enable-a52 --enable-libmpeg3 --enable-libxml2 --enable-mjpegtools --enable-imagemagick --with-libavcodec-includes=/usr/include/ffmpeg
Instead of all this hassle (which I actually kinda enjoy), there should be some kind of intelligent program which brings up a dialogue asking what options I'm interested in, and recognizes which options I have the resources for. Say I don't have libtheora installed; the program asks "Get and enable Theora support?", maybe with an explanation of what Theora is. If I say yes, it writes --enable-theora to a config script. The configure script already has the personalized stuff I need in it, like hostname, arch, and all that crap. Then it gets the source, builds a package via my packaging system, and installs it. As an option, I can store the package locally or delete it after installation. Either way, the configure info is kept, so replacing the package is easy enough.
You know what, this is starting to sound like not much more than a giant CVS system. Except you don't give the code back after you check it out. Like a library where you get to keep the books. I bet all the technology to do this could actually be scraped out of existing tools - cvs, curl, doxygen, autoconf, and automake, for example.
Just a thought. I don't even know if we're talking about the same thing. What I envision is a system that delivers code to the client, which then personalizes it. No need for developers to waste time building installation packages, and no need for users to google all day trying to find the right package for their system. Your computer gets the source and knows what to do with it.
It would also be nice to have a smart archive, too. So if I want to get a program that splices mpeg movies together, it will recommend one for me. And then get it. Instead of me having to read untold pages of documentation before finding out that mpgtx is the program I want.
-
I think the system could be even easier than that. I've never had to apply a patch before, and I think your patching system might be avoidable.
'patch -Np1 -i ../patches/fix-whatever.patch', simple as that. And it'd be automated: it'll ask which patches you want in, then it'll patch the source code, then compile, then install.
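The whole automated sequence is nothing more than (package and patch names invented):
% tar xjf source/foo-1.0.tar.bz2 && cd foo-1.0
% patch -Np1 -i ../patches/fix-whatever.patch
% ./configure && make && make install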
I use apt for my packages, and instead of releasing patches, they put out minor or micro version releases.
Patches are staying. Distributions may use micro-versions for the packages in their own repositories, using whichever patches they like from the universal repository.
So instead of having a complicated patch system, perhaps a micro-versioning system would be more efficient.
Patches are more efficient and less complicated. I dunno how micro-versioning could even work in this system, unless the distributors do it with their own packages (easy).
An example of how things could work:
Let's say I want to install transcode-1.0.0. Here's the configure sequence I used when installing transcode-1.0.0b:
% export CFLAGS="-O2 -fomit-frame-pointer -mmmx -msse -mfpmath=sse"
% ./configure --enable-mmx --enable-sse --enable-sse2 --enable-freetype2 --enable-lame --enable-ogg --enable-vorbis --enable-theora --enable-libquicktime --enable-a52 --enable-libmpeg3 --enable-libxml2 --enable-mjpegtools --enable-imagemagick --with-libavcodec-includes=/usr/include/ffmpeg
Instead of all this hassle (which I actually kinda enjoy), there should be some kind of intelligent program which brings up a dialogue asking what options I'm interested in, and recognizes which options I have the resources for.
That could be added to the client, I think.
You know what, this is starting to sound like not much more than a giant CVS system.
It is a lot like CVS, but it is not the same. We couldn't use CVS for the repository, because then you wouldn't be able to choose which patches get applied.
I bet all the technology to do this could actually be scraped out of existing tools - cvs, curl, doxygen, autoconf, and automake, for example.
A lot of it will be.
Just a thought. I don't even know if we're talking about the same thing. What I envision is a system that delivers code to the client, which then personalizes it. No need for developers to waste time building installation packages, and no need for users to google all day trying to find the right package for their system. Your computer gets the source and knows what to do with it.
Automation will be possible, but because there are so many distributions, each needs to provide a patch for each package to make it compile properly on _their_ system. Then the client can do the rest easily.
It would also be nice to have a smart archive, too. So if I want to get a program that splices mpeg movies together, it will recommend one for me. And then get it. Instead of me having to read untold pages of documentation before finding out that mpgtx is the program I want.
I'm sure that could be added to a frontend or something.
-
JUST REMEMBER:
I do NOT want anything non-base-system-specific in my /usr directory, or I will sue your ass in the highest court ... if possible ;)
And there must be a way to say that I want ONLY the runtime libraries/binaries installed, so that the development includes, m4 macros and pkgconfig entries are installed ONLY if I want them!
Make the system behave so that this software is installed under the /opt/unipkg/ hierarchy. For easy maintenance, each package should be installed in its own isolated directory, e.g. /opt/unipkg/<name>/. Each app can then be launched either directly from its directory, or via symlinks to its binaries. These symlinks would be stored in, for example, /usr/local/bin.
A version control mechanism MUST be provided. I want to be able to decide EXACTLY which version of the development headers I use for my project. This could be done quite simply by installing a package's binary images into /opt/unipkg/<name>/<version>/, and its includes/pkgconfig entries/m4 macros into /opt/unipkg/<name>/<interface>/. The <version> would be the real version of the package, e.g. 1.2, and the <interface> would be the version of a compatible interface, like 1. Then we could determine that the development files in /opt/unipkg/<name>/1 are compatible with the runtime binaries in the directories /opt/unipkg/<name>/1.x ...
And if a package uses some other versioning scheme, there should be some other mechanism to resolve compatibilities ... but deterministic versioning support is a must!
A smart ldconfig manager is also needed, so that each library is installed in its own directory, the directory entry is added to /var/unipkg/ldd/ld.so.conf, and that file is parsed with ldconfig into a temporary /var/unipkg/ldd/ld.so.cache, which is used when launching unipkg-specific applications.
When installing a package with some runtime libraries, the sequence would go:
1. check whether the package's /opt/unipkg/<name>/<version>/lib path is in the file /var/unipkg/ldd/ld.so.conf, and if not, add it there
2. ldconfig -f /var/unipkg/ldd/ld.so.conf -C /var/unipkg/ldd/ld.so.cache
To launch a unipkg app, the command sequence could be something like this:
ldconfig -N -X -f /var/unipkg/ldd/ld.so.conf -C /var/unipkg/ldd/ld.so.cache && /opt/unipkg/MyApp/bin/myapp
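Or, instead of raw symlinks, the /usr/local/bin entries could be tiny stub scripts that do the cache dance themselves (MyApp and the 1.2 version path are invented, following the scheme above):
#!/bin/sh
# hypothetical /usr/local/bin/myapp stub
ldconfig -N -X -f /var/unipkg/ldd/ld.so.conf -C /var/unipkg/ldd/ld.so.cache
exec /opt/unipkg/MyApp/1.2/bin/myapp "$@"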
Well, this is my idea of how package management SHOULD be done. I really hate the way modern distros spread their libraries/binaries/whatever all over the /usr hierarchy. It is totally chaotic ... it just sucks!
Anyways, I really hate these defects in the current GNU/Linux distributions:
- no support for EXACT development component versioning, i.e. I have no EASY way to determine exactly which interface version of a library I wish to use when linking a program
- no way to dynamically relocate a package, since all software is statically prefixed to /usr ... this is a fucking retarded way to install software. What is so deadly wrong with installing each piece of software into its own isolated directory structure?
- ldconfig has many options which allow dynamic relocation of libraries (i.e. each lib CAN be in its own directory), but these options are not used ... pff
I started ranting again, but I do it for the sake of the whole OSS scene. They do not know how wrongly they do things ...
-
I started ranting again, but I do it for the sake of the whole OSS scene. They do not know how wrongly they do things ...
All that stuff you describe could be handled by a separate package manager, like RPM or whatever. Someone could package the source ('universal') packages into an RPM and distribute that. And if RPM doesn't suffice, they could use or invent their own package manager.
I installed FreeBSD yesterday (again); its ports collection is very nice, and to my surprise, it *does* use patches to make the changes to the original code! After that, all it takes is 'make && make install', and the package is compiled and installed. Bloody brilliant.
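For anyone who hasn't seen ports, the whole dance is just (the port path here is only an example):
% cd /usr/ports/archivers/gtar
% make patch            # fetch the pristine source, apply the files/patch-* fixes
% make && make install  # build and install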
So yea, nothing innovative here.
-
On the topic of package management, this project (http://programming.newsforge.com/programming/05/07/01/211226.shtml?tid=140) seems quite interesting.
-
On the topic of package management, this project (http://programming.newsforge.com/programming/05/07/01/211226.shtml?tid=140) seems quite interesting.
If grandma can't install an rpm package, why do we want to make it so she can install from source?
I like Linux like it is, really, and I don't want anything to be dumbed down.
-
If grandma can't install an rpm package, why do we want to make it so she can install from source?
Erm, I dunno about yer grandma, but most users do know how to install RPMs on an RPM-based distro (e.g. Mandriva). There are graphical frontends and all that now.
I like Linux like it is, really, and I don't want anything to be dumbed down.
I like the way it works, to an extent. If I had the chance, I'd make some changes. But still, it's working alright.
What do you mean by "dumbed down" though?
-
By dumbed down, I'm referring to the cliché "make something foolproof and only a fool will want to use it". When I want software to hold my hand every step of the way, I'll switch to Windows. I fear that a GNU Installer would make it difficult to build from source by hand, which takes away some of the power. A good example is gmencoder. Yes, it is simple to use. But it can't do half the things that CLI mencoder can. I'd rather have the power than the friendliness.
(which is why I think OSX is so great - you can choose to browse the lazy foolproof OSX, or pop open a terminal window and raise hell)
Let's try to prevent this from happening to Linux:
(http://www.triple-bypass.net/download/hatewindows.png)
-
By dumbed down, I'm referring to the cliché "make something foolproof and only a fool will want to use it". When I want software to hold my hand every step of the way, I'll switch to Windows. I fear that a GNU Installer would make it difficult to build from source by hand, which takes away some of the power. A good example is gmencoder. Yes, it is simple to use. But it can't do half the things that CLI mencoder can. I'd rather have the power than the friendliness.
I don't think you have much to worry about.
-
How about we make the packages ordinary ISO images?
The packages would contain some XML files which would define the sonames and other required interfaces (services, binaries etc.) ... and these files would ALSO define the URLs where one could download the needed packages. Like BitTorrent or something.
And if an ISV wants to make a retard-proof click'n'pray application, he would just include ALL the libraries/binaries that are not defined in the LSB.
Then on the target distro a user could simply click the ISO, the mdm-platform daemon would loop-mount it, generate a sandbox (with sonames, ld cache, binary assignments etc.) and Run The Damn Thing! =)
I got this idea browsing some developer comments from the Darwind project. Damn, those Apple guys did a fine job with that framework system.
Why in the fuck would you want to use ISO images? They don't support real permissions, they don't support lots of things, including tricky filenames; the format has a million limitations. ISO images are the worst idea I have EVER heard of for a packaging system. I often wonder why the hell such a backwards format is so popular for CD-ROM and DVD distribution (I usually use UDF myself). Tarballs are fine for package distribution, and package management systems often use them.
-
By dumbed down, I'm referring to the cliché "make something foolproof and only a fool will want to use it". When I want software to hold my hand every step of the way, I'll switch to Windows. I fear that a GNU Installer would make it difficult to build from source by hand, which takes away some of the power. A good example is gmencoder. Yes, it is simple to use. But it can't do half the things that CLI mencoder can. I'd rather have the power than the friendliness.
(which is why I think OSX is so great - you can choose to browse the lazy foolproof OSX, or pop open a terminal window and raise hell)
Let's try to prevent this from happening to Linux:
(http://www.triple-bypass.net/download/hatewindows.png)
Ever thought of just clicking "Take no action" and checking the "Always do the selected action" box?
OMG GNOME BRINGS UP A DIAAALOUGGEEEE THAT IS SOOOO ANNOYING WHEN I PLUG MY CAMERA IN ALREADY. OMG THAT IS SO MICROSOFT.
-
Ever thought of just clicking "Take no action" and checking the "Always do the selected action" box?
OMG GNOME BRINGS UP A DIAAALOUGGEEEE THAT IS SOOOO ANNOYING WHEN I PLUG MY CAMERA IN ALREADY. OMG THAT IS SO MICROSOFT.
KDE rules
-
KDE sucks. Gnome rules.
-
KDE sucks. Fluxbox rules.
^ fixed.
-
Fluxbox sucks. XFCE rules.
-
Fluxbox sucks. XFCE rules.
5'd.
-
WindowMaker rules :D
-
KDE! \o/
-
Why in the fuck would you want to use ISO images? They don't support real permissions, they don't support lots of things, including tricky filenames; the format has a million limitations. ISO images are the worst idea I have EVER heard of for a packaging system. I often wonder why the hell such a backwards format is so popular for CD-ROM and DVD distribution (I usually use UDF myself). Tarballs are fine for package distribution, and package management systems often use them.
OKAY, no ISO format then.
But some mountable image format is needed. Tar/*zip packages cannot be mounted, so one can't make retard-proof software packages like the .app bundles in Mac OS X.
Maybe UDF instead of ISO; will you be content with my ideas then?
And yeah, I've been told I'm a fucking asshole plenty of times, but I gotta tell ya this: if GNU/Linux systems are to be adopted by the mainstream, they need an easier way to install and run software. Currently there are NO standards that make it possible to create a package, download it on any distro, and install and run the program. And for this I blame the chaotic development model of the GNU/Linux OSS scene.