Discussion:
Plan for 11.2?
Birger Kollstrand
2008-12-17 14:25:27 UTC
Hi,

I'm wondering if there is a plan for the 11.2 distribution? It's not
when it's available but more what Novell wants to achieve with it.

There are several important areas that I would like to help out with:
1. vastly reduced size of the core distro and easy setup for addons.
Take out all but core KDE (and Gnome, but I'm a KDE'er so....).
2. mobility. Moving between networks: WLAN, LAN, GPRS, Bluetooth, etc.
Suspend/resume, auto setup of VPN when moving outside the home LAN.
Roaming profiles/autosyncing of data with a server via Unison, iFolder
or similar.
3. syncing data with mobile phones
4. VPN client and server config. If so, choose one VPN server variant
and support it 100%. (I like OpenVPN :-) )
5. Home. Easy home server setup in one YaST module for usability.
Fileserver, printer server, music server, movie server, picture
server.
6. Connectivity from different types of clients, especially UPnP and
DLNA compatibility. Windows, Mac and Linux client access to printers
and files of course, and also wizards or similar to aid users in
getting connected on the client machines. (Yes, this might involve
Windows/Mac/OpenSolaris SW.)
7. media syncing with handheld devices like iPods, media phones,
Creative ZEN. Also conversion of media to the handheld's preferred
format without the user needing to set up a complex structure.
8. officially supported game archive, maybe also with commercial games
available, like Opera/Flash etc. are now.
9. availability on new platforms. Specific focus on netbooks.
10. Default media made for USB key as well as DVD.
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.

Of course, if I do not make any sense (which often happens) I will be
glad to discuss the points.

And again, these are all parts I will gladly assist in testing. If
some of it is made in Python I will try to assist with code also, but
be warned: it's better to keep me as a tester!

Kind Regards
Birger
Rob OpenSuSE
2008-12-17 14:47:12 UTC
Post by Birger Kollstrand
I'm wondering if there is a plan for the 11.2 distribution? It's not
when it's available but more what Novell wants to achieve with it.
1. vastly reduced size of the core distro and easy setup for addons.
Take out all but core KDE (and Gnome, but I'm a KDE'er so....).
Have you tried the Net Install CD? Even X is optional, whilst the
defaults may include stuff like AppArmor that you might not use. The
Live CDs copy over a 'typical' package set, for a simple
starting point for those confused by choices.
Post by Birger Kollstrand
10. Default media made for USB key as well as DVD.
Why not a Live USB using Kiwi? It may not have the options of the Net
or DVD install, but won't it be "ready to go" for more users?
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
If you have the DVD image, then you can loopback-mount it to
/srv/openSuSE, export it via NFS, and then boot the net-install
client.

/etc/fstab(5) :

/dl/openSuSE/openSUSE-10.3-GM-DVD-i386-iso/openSUSE-10.3-GM-DVD-i386.iso
/srv/openSuSE auto loop,ro 0 0

/etc/exports :
/srv/openSuSE *(ro,root_squash,sync,no_subtree_check)
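For completeness, the commands to activate those two entries might look like this (a sketch using the example paths above; run as root, and adjust the paths to your own layout):

```shell
# Sketch: activate the fstab and exports entries above (run as root).
mkdir -p /srv/openSuSE       # mount point for the ISO
mount /srv/openSuSE          # loop-mounts the DVD image per the fstab line
exportfs -ra                 # re-export everything listed in /etc/exports
showmount -e localhost       # verify the share is visible to clients
```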


Restoring the config of client machines seems like a rather tall
order, given that they could have any distro, Windows, Solaris or BSD
on them.

Do you mean that you want to boot the client machine, then install
into some set configuration, e.g. take all defaults or something? Or are
you expecting to plan for future testing by having swap, /boot and /
partitions available?
Cristian Rodríguez
2008-12-17 15:03:51 UTC
Post by Birger Kollstrand
1. vastly reduced size of the core distro
OK, how do you suggest we do that? Is 250/300 MB too big for you?
Post by Birger Kollstrand
and easy setup for addons.
Easier than "one click install", eh?
Post by Birger Kollstrand
4. VPN client and server config. If so, choose one VPN server variant
and support it 100%. (I like OpenVPN :-) )
A YaST module to configure an OpenVPN server, and fixing the
networkmanager-openvpn support for clients, would be really nice indeed.
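Incidentally, the file such a YaST module would have to generate is small; a minimal OpenVPN server configuration sketch might look like this (illustrative values only; the PKI file paths are assumptions, not an official template):

```
# /etc/openvpn/server.conf -- minimal routed-VPN sketch (illustrative values)
port 1194                      # default OpenVPN port
proto udp
dev tun                        # routed tunnel
ca   /etc/openvpn/ca.crt       # assumed PKI file locations
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh1024.pem
server 10.8.0.0 255.255.255.0  # VPN subnet handed out to clients
keepalive 10 120
persist-key
persist-tun
```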
--
"We have art in order not to die of the truth" - Friedrich Nietzsche

Cristian Rodríguez R.
Platform/OpenSUSE - Core Services
SUSE LINUX Products GmbH
Research & Development
http://www.opensuse.org/
Stanislav Visnovsky
2008-12-17 19:50:00 UTC
On Wednesday 17 December 2008 15:25:27 Birger Kollstrand wrote:

[snip]
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.
What is missing in yast2-instserver? And for restoring configuration, you can
use the autoyast "Create Profile" function and then deploy via autoyast.

Stano
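The clone-and-deploy workflow described above can be sketched like this (a sketch only: the `clone_system` module name and the profile URL are illustrative assumptions; check your yast2-autoyast version):

```shell
# Sketch (run as root); capture the running system's config as a profile:
yast clone_system                     # produces /root/autoinst.xml
cp /root/autoinst.xml /srv/profiles/  # publish it, e.g. over NFS or HTTP
# On the client, point the installer at the profile with a boot option:
#   autoyast=nfs://server/profiles/autoinst.xml
```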
Hans Witvliet
2008-12-17 22:05:18 UTC
Post by Stanislav Visnovsky
[snip]
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.
What is missing in yast2-instserver? And for restoring configuration, you can
use autoyast Create Profile function and then deploy via autoyast.
Stano
mentioning autoyast....
I noticed (didn't check bugzilla yet) that _IF_ you were wise enough to
enable autoyast during software selection, the "clone system" checkbox
at the very end of the installation is not greyed out.
However, if one chooses to enable this, no data is collected and saved
in /root/autoinstall.xml anymore (checked in B5, RC1). Is this solved in
the final version??? It produces something different than when creating a
reference profile.


I could add some more to the wishlist:
I would like to see some cluster-aware tools, like ovirt or lax (from
teegee.de). Especially SLES would benefit from it.
ovirt comes from the Fedora folks, while lax is (afaik)
openSUSE 11.0/SLES10 SP2 based.

hw
Stanislav Visnovsky
2009-01-08 14:51:30 UTC
Post by Hans Witvliet
Post by Stanislav Visnovsky
[snip]
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.
What is missing in yast2-instserver? And for restoring configuration, you
can use autoyast Create Profile function and then deploy via autoyast.
Stano
mentioning autoyast....
I noticed (didn't check bugzilla yet) that _IF_ you were wise enough to
enable autoyast during software selection, the "clone system" checkbox
at the very end of the installation is not greyed out.
Yes, you need the autoyast code to gather the info.
Post by Hans Witvliet
However, if one chooses to enable this, no data is collected and saved
in /root/autoinstall.xml anymore (checked in B5, RC1). Is this solved in
the final version???
I'm not sure, I will check.
Post by Hans Witvliet
It produces something different than when creating a
reference profile.
No, it does not. The only difference is a set of modules to gather information
from.

Stano
Post by Hans Witvliet
Could add some more to the wishlist.
I would like to see some cluster-aware tools. like ovirt or lax (from
teegee.de). Specially SLES would bennefit from it.
ovirt comes from the fedora-boys, while lax is (afaik)
opensuse_11.0/sles10sp2 based.
hw
Rob OpenSuSE
2009-01-08 15:23:30 UTC
For 11.2, what about using filesystem capabilities to reduce the
number of suid executables, in order to reduce the criticality of
security flaws?

Ulrich Drepper has blogged a short example that is usable in Fedora
10 - http://udrepper.livejournal.com/20709.html

LWN subscriber-only content until 2009/01/14 - http://lwn.net/Articles/313838/
has more discussion and info on this, if you can read it.

The kernel now uses filesystem capabilities, and at least the
default and commonest filesystems support extended attributes (xattrs).
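The idea can be sketched with the classic ping example (a sketch assuming the libcap tools are installed and the filesystem carries xattrs; run as root):

```shell
# Sketch: replace a setuid-root ping with a capability-tagged binary.
chmod u-s /bin/ping                # drop the suid bit
setcap cap_net_raw+ep /bin/ping    # grant only the raw-socket capability
getcap /bin/ping                   # should report the cap_net_raw capability
```

A flaw in ping then yields only raw-socket access, not full root.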
Giovanni Masucci
2008-12-18 00:52:55 UTC
I'd mostly like to see two things done:

- when you click on a downloaded rpm from Dolphin you get dependency errors
even if the dependencies are in your enabled repositories, so you have to
install the deps via zypper/YaST and then click on the rpm file. This is
not very easy for the end user... especially if he comes from Ubuntu, where
gdebi automatically handles this kind of situation.

- kernel 2.6.28 is almost released, but 11.2 will probably ship with >=2.6.29.
Kernel 2.6.29 will bring kernel mode setting (flicker-free boot with support
for wide-screen resolutions, better suspend). Fedora has a new implementation
of graphical boot that uses kernel mode setting; it's called Plymouth and
it's already working in Fedora 10. Ubuntu is considering switching to it for
the next version. I think that openSUSE should think about it too. Here's a
link with some more info and a video of Plymouth + kernel mode setting:
http://www.phoronix.com/scan.php?page=news_item&px=Njg3Nw

and here's a page with a summary of Fedora's work:
http://fedoraproject.org/wiki/Features/BetterStartup
Stanislav Visnovsky
2008-12-18 07:11:54 UTC
Post by Giovanni Masucci
- when you click on a downloaded rpm from Dolphin you get dependency errors
even if the dependencies are in your enabled repositories, so you have to
install the deps via zypper/YaST and then click on the rpm file. This is
not very easy for the end user... especially if he comes from Ubuntu, where
gdebi automatically handles this kind of situation.
That's a bug.

Stano
Basil Chupin
2008-12-18 09:04:41 UTC
Post by Stanislav Visnovsky
Post by Giovanni Masucci
- when you click on a downloaded rpm from Dolphin you get dependency errors
even if the dependencies are in your enabled repositories, so you have to
install the deps via zypper/YaST and then click on the rpm file. This is
not very easy for the end user... especially if he comes from Ubuntu, where
gdebi automatically handles this kind of situation.
That's a bug.
Stano
Christ!

WHAT is "a bug"? What Giovanni is reporting, or Ubuntu automatically
handling the situation?
--
Be nice to people on your way up - you'll see the same people on your way down.
Andreas Jaeger
2008-12-18 09:22:34 UTC
Post by Basil Chupin
Post by Stanislav Visnovsky
Post by Giovanni Masucci
- when you click on a downloaded rpm from Dolphin you get dependency errors
even if the dependencies are in your enabled repositories, so you have to
install the deps via zypper/YaST and then click on the rpm file. This is
not very easy for the end user... especially if he comes from Ubuntu, where
gdebi automatically handles this kind of situation.
That's a bug.
Stano
Christ!
WHAT is "a bug"? What Giovanni is reporting or Ubuntu automatically
handling the situation?
The first one - it should work the same way with zypper and Dolphin.

Andreas
--
Andreas Jaeger, Director Platform / openSUSE, ***@suse.de
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
Maxfeldstr. 5, 90409 Nürnberg, Germany
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Basil Chupin
2008-12-19 04:02:23 UTC
Post by Andreas Jaeger
Post by Basil Chupin
Post by Stanislav Visnovsky
Post by Giovanni Masucci
- when you click on a downloaded rpm from Dolphin you get dependency errors
even if the dependencies are in your enabled repositories, so you have to
install the deps via zypper/YaST and then click on the rpm file. This is
not very easy for the end user... especially if he comes from Ubuntu, where
gdebi automatically handles this kind of situation.
That's a bug.
Stano
Christ!
WHAT is "a bug"? What Giovanni is reporting or Ubuntu automatically
handling the situation?
The first one - it should work the same way with zypper and Dolphin.
Andreas
Thank you, Andreas.

Ciao.
--
Be nice to people on your way up - you'll see the same people on your way down.
Stephan Binner
2008-12-21 17:45:34 UTC
Post by Stanislav Visnovsky
Post by Giovanni Masucci
- when you click on a downloaded rpm from Dolphin you get dependency
errors even if the dependencies are in your enabled repositories, so you
That's a bug.
More specifically: https://bugzilla.novell.com/show_bug.cgi?id=459268

Bye,
Steve
ab
2008-12-19 09:05:17 UTC
There's already an 11.2 schedule discussion on the other list:
<http://lists.opensuse.org/opensuse-project/2008-12/msg00025.html>
Larry Stotler
2008-12-19 14:19:20 UTC
Post by ab
There's already an 11.2 schedule discussion on the other list:
<http://lists.opensuse.org/opensuse-project/2008-12/msg00025.html>
From: Michael Loeffler
First we talked about a July '09 release to come close to an 8-month
release cycle. But KDE 4.3 is scheduled for release on June 30th and
probably an OpenOffice.org release will be out end of June as well -
both wouldn't make it into a July openSUSE 11.2. Therefore we're now
thinking about a September release. Besides getting the most
current OpenOffice and KDE in
I asked for a delay for KDE 4.2 and was shot down because of the 6-month
cycle. Now they are talking about 9 months and what I had
asked for...
Rob OpenSuSE
2008-12-19 15:11:49 UTC
Post by Larry Stotler
Post by ab
There's already an 11.2 schedule discussion on the other list:
<http://lists.opensuse.org/opensuse-project/2008-12/msg00025.html>
I asked for a delay for KDE 4.2 and was shot down because of the 6
month cycle. Now they are talking about a 9 month and what I had
asked for..........
At least they're not talking about releasing in sync with the
kernel, every 3 months!

The other thread is more about schedule issues and marketing, rather
than what can be developed for 11.2.

Now I don't really know what the true current state of play is here...

Personally I'd actually like to get away from "Releases" altogether,
just have them as 'snapshots' in time, as on the net you don't have
much choice but to keep relatively current. It's not just because of
security patches, but also things like older browsers not working on
recent websites and such.

But to have something like Factory be the equivalent of Debian
"unstable", and then 11.1 being "current" to be replaced by 11.2, with
11.1 getting the "stable" moniker and living on as SLED/SLES; then
upgrades have to get major attention during testing and package
building.

Whilst on the forum some are advocating "zypper dup", those using that to
go from 11.0 -> 11.1 are hitting problems with pam, and who knows what
else that isn't immediately noticeable?

The whole routine of downloading an iso for release testing labelled -alpha
or -beta, doing a fresh install, and then trying to test it, tends to lead
to "playing" with the system, rather than really working with it.
peter nikolic
2008-12-19 20:01:54 UTC
Post by Larry Stotler
. Now they are talking about 9 months and what I had
asked for...
Maybe at last someone is seeing sense. 9 months is a bit more like it; that way
the whole world gets a much more stable release with fewer bugs causing heated
exchanges on the list. Now we just need to get this darn KDE4 distribution
stopped till it is at the very least the equal of KDE 3.5.7 or 9.

Pete
--
SuSE Linux 10.3-Alpha3. (Linux is like a wigwam - no Gates, no Windows, and an
Apache inside.)
Herbert Graeber
2008-12-19 20:57:18 UTC
Post by Larry Stotler
Post by ab
There's already an 11.2 schedule discussion on the other list:
<http://lists.opensuse.org/opensuse-project/2008-12/msg00025.html>
From: Michael Loeffler
First we talked about a July '09 release to come close to an 8-month
release cycle. But KDE 4.3 is scheduled for release on June 30th and
probably an OpenOffice.org release will be out end of June as well -
both wouldn't make it into a July openSUSE 11.2. Therefore we're now
thinking about a September release. Besides getting the most
current OpenOffice and KDE in
Why?
Post by Larry Stotler
I asked for a delay for KDE 4.2 and was shot down because of the 6-month
cycle. Now they are talking about 9 months and what I had
asked for...
Yes, you asked for it long after the schedule had been fixed. And even if
such a delay had been considered, it would have been really impossible
to switch to KDE 4.2 that fast over the Christmas holidays. Maybe those KDE
wizards working at SUSE could have made it, but it is impossible to be sure
about that.

Now we are lucky that the release of KDE 4.3 (and some other important
releases, too) and openSUSE 11.2 will be a good match, even if KDE 4.3 is
released a little bit later than planned.

Herbert
Birger Kollstrand
2008-12-20 13:23:25 UTC
Hey!

Thanks for hijacking the thread when it was specifically started with:

"It's not when it's available but more what Novell wants to achieve with it."

So I'd love it if anyone would care to discuss where we want to go,
rather than when.

How the h... should we know when, when nobody has a clue about what's going in?

and as a side note, I think we should care LESS about KDE 4.1 vs 4.2 or
Gnome 2.x. It's what the users can get done with it that counts. Both
current major desktops are usable for years to come.....

Maybe a new thread should be started or is factory not the right place
to discuss contents?

cu
Post by Herbert Graeber
Post by Larry Stotler
Post by ab
There's already an 11.2 schedule discussion on the other list:
<http://lists.opensuse.org/opensuse-project/2008-12/msg00025.html>
From: Michael Loeffler
First we talked about a July '09 release to come close to an 8-month
release cycle. But KDE 4.3 is scheduled for release on June 30th and
probably an OpenOffice.org release will be out end of June as well -
both wouldn't make it into a July openSUSE 11.2. Therefore we're now
thinking about a September release. Besides getting the most
current OpenOffice and KDE in
Why?
Post by Larry Stotler
I asked for a delay for KDE 4.2 and was shot down because of the
6-month cycle. Now they are talking about 9 months and what I had
asked for...
Yes, you asked for it long after the schedule had been fixed. And even if
such a delay had been considered, it would have been really impossible
to switch to KDE 4.2 that fast over the Christmas holidays. Maybe those KDE
wizards working at SUSE could have made it, but it is impossible to be sure
about that.
Now we are lucky that the release of KDE 4.3 (and some other important
releases, too) and openSUSE 11.2 will be a good match, even if KDE 4.3 is
released a little bit later than planned.
Herbert
--
Rob OpenSuSE
2008-12-20 13:48:28 UTC
Post by Birger Kollstrand
and as a side note, I think we should care LESS about KDE 4.1 vs 4.2 or
Gnome 2.x. It's what the users can get done with it that counts. Both
current major desktops are usable for years to come.....
Not the opinion of many KDE 3.5.x users, who have not been converted
by KDE4 yet; it's getting better, but 4.3 is going to be an important
release for KDE. If it's not right again in 4.3, then the community
will want a 3.5 release, which can't be included officially because
no one wants to commit to maintenance for 2+ more years.
Post by Birger Kollstrand
Maybe a new thread should be started or is factory not the right place
to discuss contents?
People need a reason to upgrade, and improvements to rarely used tools
like installers and partitioners won't cut much ice. So your idea of
looking at features is good.

This was addressed to you, so maybe discussion can progress if you consider it.
Post by Birger Kollstrand
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.
What is missing in yast2-instserver? And for restoring configuration, you can
use autoyast Create Profile function and then deploy via autoyast.
Oddball
2008-12-21 19:55:54 UTC
Post by Rob OpenSuSE
Post by Birger Kollstrand
and as a side note, I think we should care LESS about KDE 4.1 vs 4.2 or
Gnome 2.x. It's what the users can get done with it that counts. Both
current major desktops are usable for years to come.....
Not the opinion of many KDE 3.5.x users, who have not been converted
by KDE4 yet; it's getting better, but 4.3 is going to be an important
release for KDE. If it's not right again in 4.3, then the community
will want a 3.5 release, which can't be included officially because
no one wants to commit to maintenance for 2+ more years.
KDE 4 is getting better; you just have to loosen the ties to KDE 3,
and let it go...
it is the past already. ;)
Post by Rob OpenSuSE
Post by Birger Kollstrand
Maybe a new thread should be started or is factory not the right place
to discuss contents?
People need a reason to upgrade, and improvements to rarely used tools
like installers and partitioners won't cut much ice. So your idea of
looking at features is good.
The polar caps are almost melted down, so if you want to cut ice, you
have to be quick.
The tools you mention were very surely due for an overhaul, and they
still have to be tuned, but look much, much better than before.
The installer, partitioner and upgrade tools are the most important of the
distro.
They decide whether people do or don't like a distro to work with.
They are, so to speak, the 'Masters of Trust' you are willing to trust
your hardware to.
(In the past there were many discussions about this.)

The handles for new and existing hardware have to be there and, if
possible, simple and understandable... and Birger, I think you have a
point there in this post ;)

As always, web standards have to be included, however difficult this
may sound, as well as full media support.

What I still notice is the not always flawlessly working networking and
video hardware drivers...

It would be very nice if a breakthrough were made in these
matters, surely when adding extra time before the next release.
Post by Rob OpenSuSE
This was addressed to you, so maybe discussion can progress if you consider it.
Post by Birger Kollstrand
Post by Birger Kollstrand
11. a standard yast module for server setup of net install of
machines. This will aid in testing when there is a lot of machine
variants to test. Easy restore to original config would be good also.
(All server based :-) ) I have 15+ different computers that I can
test with, but it's too much work going back and forth now so I just
use 2.
What is missing in yast2-instserver? And for restoring configuration, you can
use autoyast Create Profile function and then deploy via autoyast.
--
Enjoy your time around,


Oddball (M9.) (Now or never...)


OS: Linux 2.6.27.7-8-default x86_64
Current user: ***@AMD64x2-sfn1
System: openSUSE 11.2 Alpha 0 (x86_64)
KDE: 4.1.3 (KDE 4.1.3) "release 4.6"
Vincent Untz
2008-12-20 15:27:49 UTC
Post by Birger Kollstrand
Hey!
"It's not when it's available but more what Novell wants to achieve with it."
I hope you mean s/Novell/the community/.

Vincent
--
Happy people are not in a hurry.
Birger Kollstrand
2008-12-20 16:28:17 UTC
Post by Vincent Untz
Post by Birger Kollstrand
Hey!
"It's not when it's available but more what Novell wants to achieve with it."
I hope you mean s/Novell/the community/.
Vincent
Yes, ...... and no.
The community expresses what it wants via the mailing lists, wikis and
bugzillas. So that is quite open.

What Novell wants is for me quite blank. And I really would like to
know, as I do believe that a good product is best made with strong
company support.
Herbert Graeber
2008-12-20 19:56:56 UTC
Post by Birger Kollstrand
"It's not when it's available but more what Novell wants to achieve with it."
[...]
I haven't. Sure. I have answered a mail of Larry Stotler's in this thread.

I will see if I can get enough information for a kmail bug report.

Herbert
Matt Sealey
2009-01-08 19:21:15 UTC
On Wed, Dec 17, 2008 at 8:25 AM, Birger Kollstrand
Post by Birger Kollstrand
Hi,
I'm wondering if there is a plan for the 11.2 distribution? It's not
when it's available but more what Novell wants to achieve with it.
1. vastly reduced size of the core distro and easy setup for addons.
Take out all but core KDE (and Gnome, but I'm a KDE'er so....).
I'd be all for this, and also for something like splitting X.org and
KDE into slightly more fine-grained chunks. In the past, installing
KDE apps (and some GNOME stuff, for that matter) was a matter of
grabbing huge swathes of stuff whether you wanted it or not. If you just
wanted Marble you had to install the entirety of kdeedu. Now it's all
separate in 11.1. This is awesome because you can pick and choose -
but the damn patterns mean you have to grab nearly all of it by
default.

More fine-grained patterns would help.

radeonhd and unichrome are separate video drivers, but otherwise
everything is in xorg-x11-drivers-video (including, on platforms like
PowerPC, a ton of drivers which have never been seen in a PowerPC box,
nor probably ever will be - i740, intel i810, vesa (!!) and vbe and
suchlike). While it doesn't waste too much space (on the order of a
couple of megabytes) compared to usual disk sizes as a whole, most
systems only have one graphics card, and if they have two, it's
usually under the same driver.

Perhaps they could be split into xorg-x11-drivers-video-rare, plus the
popular types (radeon, sis/xgi, nv, maybe nouveau at some very later
stage like 13.1 :), and when a user is installing, the rest can be removed
(just like some stuff is added in stage2) after SaX2 has run and
picked the right card. The X.org autoconfiguration system, which allows
X.org to boot without any configuration for a card specifically,
could simply be expanded to run SaX2 or some replacement, which can go
off and grab the right driver from the package repository, then
initialize it properly.

The same goes for xorg-x11-libs, which is a behemoth. I filed a bug
report about it not long ago (since the default build extracts EVERY
*.tar.bz2 in /usr/src/packages/SOURCES with wild abandon, which is
nasty if you installed kernel-source.src), and I understand the
maintainer's pain in maintaining ~30 different packages for each
library; however I do think that it can be done, and Fedora (RPM
based), Debian/Ubuntu and the like all manage to keep them separate.
I think the advantage of only installing what you need is a good one.
There are some weird conceptual decisions here - Xrender is a separate
library, but Xcomposite is in the -libs monster package, which is odd
since both are enabled by default by the X server.

(BTW, is EXA going to become the default in the near future? The
performance benefits really are noticeable on radeon cards, and even
more so with a post-11.1 driver, since David Airlie put in some extra
work on the EXA acceleration.)

I'd also be completely in favour of ditching qt3 - when I installed
GNOME last time, somehow I got this package. I don't think it's a hard
dependency of anything here, but I noticed it in KDE4 too, which is
odd since 11.1 was supposed to have "no Qt3 or GTK apps on the default
desktop". If that's true, why install it? The Qt3 compatibility library
was installed too.

With regards to KDE, I like SUSE because it has Kickoff, which is a
lovely little menu, but I have since been poking around in GNOME and
noticed the menu there looking ever-so-slightly lovelier. The
applications list is a nice idea, far better than the weird
click-slide hierarchy of the KDE menu. It also means you could replace
the application launcher; KDE still misses the kind of netbook-style
support that, for instance, ume-launcher provides on Ubuntu (which is a
direct replacement for the GNOME one). It must also be said that the
Qt4 YaST GUI is fugly at best, compared to the launcher-style menu of
the GNOME UI module, which matches the GNOME launcher perfectly.

I know some things you just don't want to copy, but in the end,
switching between desktops gives two completely different experiences,
and sometimes simple is best (which is what GNOME is, to a fault
sometimes).

On the other hand, the GNOME package manager UI is f**king awful.
Everything is so squished and ugly. Adding a package squishes it even
more... by adding larger rows to the right. This means your list of
packages to install pushes out the list of packages you want to search
for! Can't this be moved to some kind of tab like "Install Queue" or
something, like Kuroo had/has?

http://kuroo.org/screenshots.html

(my screen res is 1280x1024 but I don't like to open all my windows
maximized if I don't need to.. sometimes I want to refer to a document
about which packages I need, while I am picking them..)

The other thing is, I absolutely loathe yast2-bootloader, for a lot of
reasons. While a neat idea, it seems to have fallen down as a rather
poorly maintained set of hacks for random different bootloader
solutions, to the point that it abstracts everything into a LILO-like
bootloader system which is really, really difficult to actually manage.
Adding new bootloaders is some nightmare; and while it supports some
bootloader methods (like PReP zImage), these aren't enabled for
platforms that need it, and supporting it would mean modifying tens of
files by my reckoning. And then fixing lilo-ppc too (which is nothing
to do with, nor any part of, lilo, sigh) - could some effort be put in to
rework the bootloader management such that bootloaders are treated as
native modules, and adding a new bootloader is a simple matter of
accepting a string of commands (add kernel+initrd, return its index,
set some settings, arguments on each entry) which outputs a new
configuration and runs any update binaries in place?

(naive request, since the above is what it does; it just goes through
3 levels of abstraction too many IMO. When every bootloader is forced
to act like LILO, you may as well just reduce all bootloader support
to pluggable modules which only accept the data LILO could use, and
perform tasks natively. It would also mean that yast2-bootloader would
only include the code you need on YOUR architecture and for YOUR
bootloader, and not the 5 or 6 alternatives.)
Post by Birger Kollstrand
9. availability on new platforms. Specific focus on netbooks.
See above. More focus in KDE on application launching etc. and
comparable tools to the Ubuntu GNOME stuff would be great.

http://www.freesoftwaremagazine.com/columns/ubuntu_netbook_remix_detailed_explanation

Half of it would be easily implemented in Plasma given the time and developers.
Post by Birger Kollstrand
10. Default media made for USB key as well as DVD.
It actually requires a hell of a lot of setup to get a USB key
bootable, more than you could possibly do by shipping a file. I guess
a little Windows installer would be okay that shoved a bootloader on
there and copied the NET iso stuff over (or even the full DVD), but
that kind of defeats the object of grabbing a clean system and
installing SUSE from scratch from a USB key. You'd need an existing
system to do it... which is the computing equivalent, IMO, of
waterboarding.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Larry Stotler
2009-01-08 20:35:35 UTC
Permalink
Post by Matt Sealey
I'd also be completely condusive to ditching qt3 - when I installed
GNOME last time, somehow I got this package. I don't think it's a hard
dependency on anything here, but I noticed it in KDE4 too, which is
odd since 11.1 was supposed to have "no Qt3 or GTK apps on the default
desktop". If that's true why install it? The Qt3 compatibility library
was installed too.
The reason that qt3 is loaded is that some of the KDE programs
aren't yet Qt4, so you need to have it to run those programs. If
all your programs are Qt4, you should be able to remove it, though.
Post by Matt Sealey
With regards to KDE, I like SUSE because it has Kickoff, which is a
lovely little menu,
See, I think it sucks, and I change it to the old KDE menu immediately.
It runs really slowly on my older P3 laptop. To each their own, though.

openSUSE and Linux's RAM needs have really gotten out of hand lately.
I was running 10.2 on a PowerMac 7500/G4/700 with 256MB with no
problem. Now I need 512 or it's slow as molasses. Not a good trend
IMO.
Herbert Graeber
2009-01-08 21:07:06 UTC
Permalink
Post by Larry Stotler
[...] , but I noticed it in KDE4 too, which is odd since 11.1 was supposed
to have "no Qt3 or GTK apps on the default desktop". If that's true why
install it? The Qt3 compatibility library was installed too.
The reason that qt3 is loaded is that some of the KDE programs
aren't yet Qt4, so you need to have it to run those programs. If
all your programs are Qt4, you should be able to remove it, though.
Even parts of YaST still use qt3 with kde3 themes and therefore kdelibs3.
Post by Larry Stotler
[...]
Herbert
Stanislav Visnovsky
2009-01-09 08:47:22 UTC
Permalink
Post by Herbert Graeber
Post by Larry Stotler
[...] , but I noticed it in KDE4 too, which is odd since 11.1 was
supposed to have "no Qt3 or GTK apps on the default desktop". If that's
true why install it? The Qt3 compatibility library was installed too.
The reason that qt3 is loaded is that some of the KDE programs
aren't yet Qt4, so you need to have it to run those programs. If
all your programs are Qt4, you should be able to remove it, though.
Even parts of YaST still use qt3 with kde3 themes and therefore kdelibs3.
The only part is the YaST Control Center, which has not made it to Qt4
yet. But it's a priority for us to move it - you can see a lot of
discussions around what to do on that front. And even the current
control center does not need kdelibs at all.

Stano
Putrycz, Erik
2009-01-08 22:28:49 UTC
Permalink
Post by Larry Stotler
openSUSE and Linux's RAM needs have really gotten out of hand lately.
I was running 10.2 on a PowerMac 7500/G4/700 with 256MB with no
problem. Now I need 512 or it's slow as molasses. Not a good trend
IMO.
I don't think this should be a priority for the main distro. At least a
gig of RAM can be considered a requirement these days.
For less, something like an embedded distro would be more
appropriate...

Erik.
Rob OpenSuSE
2009-01-09 00:02:17 UTC
Permalink
Post by Putrycz, Erik
At least a
gig of RAM can considered as a requirement these days.
For less, something like an embedded distrib would be more
appropriate...
Why? With KDE4 plus Firefox, 512MB RAM was perfectly adequate in my
testing. In fact it is very hard to get the VM to use swap space at
all without increasing the default value of "swappiness", which I
think shows the default is poorly chosen.
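For reference, the default being criticised here lives in
/proc/sys/vm/swappiness (60 on kernels of this era). A minimal sketch of
reading and raising it, with the path left as a parameter so the
functions aren't tied to a live system:

```python
# Sketch: read and set the kernel's swappiness knob. Writing needs root
# and is equivalent to `sysctl vm.swappiness=<value>`; higher values
# make the VM more willing to push idle pages out to swap.

SWAPPINESS_PATH = "/proc/sys/vm/swappiness"

def get_swappiness(path=SWAPPINESS_PATH):
    with open(path) as f:
        return int(f.read().strip())

def set_swappiness(value, path=SWAPPINESS_PATH):
    if not 0 <= value <= 100:
        raise ValueError("swappiness must be in 0..100")
    with open(path, "w") as f:
        f.write(str(value))
```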

I didn't think 256MB RAM was so awful either under i586; the VM system
performed reasonably well (a 1.6GHz Pentium system with the horrible
Rambus RIMM memory Intel tried to foist on the market at the time of
the P4's intro, making an inexpensive upgrade harder to find). I did
use the Net Install, though, and prepared swap space once I'd got it
booted. Ideally I'd find an extra compatible RDRAM module, but it was
all right. Using XFCE or LXDE would be more pleasant, I think, than
GNOME or KDE4 if you're going to run applications. Perhaps my
experience was better because I added only the KDE4 basics, and may
have avoided stuff like Nepomuk and Beagle, as I had limited space for /.

There's no reason to tell people not to use old machines just because
you'd not spec out such a machine for a desktop. 256MB RAM is plenty
for many SOHO server applications.
Larry Stotler
2009-01-09 02:57:09 UTC
Permalink
On Thu, Jan 8, 2009 at 7:02 PM, Rob OpenSuSE
Post by Rob OpenSuSE
Why? With KDE4 plus Firefox 512MB RAM was perfectly adequate in my
testing. In fact it is very hard to get the VM to use swap space at
all, without increasing the default value of "swapiness"; which I
think shows the default is poorly chosen.
Agreed. Those who make comments like that evidently don't have an
older system, or if they do, don't use it. I regularly run on older
hardware with few issues. Granted, it's not as fast as my 64-bit
machines, but why should I spend money I don't have to upgrade? The
economy is not that great now, and it can be more expensive to upgrade
an older system than a newer one.
Post by Rob OpenSuSE
There's no reason to tell people not to use old machines just because
you'd not spec out such a machine for a desktop. 256MB RAM is plenty
for many SOHO server applications.
I run a server with 128MB. It doesn't need much to just serve up files.
Stanislav Visnovsky
2009-01-09 08:49:37 UTC
Permalink
Post by Larry Stotler
On Thu, Jan 8, 2009 at 7:02 PM, Rob OpenSuSE
Post by Rob OpenSuSE
Why? With KDE4 plus Firefox, 512MB RAM was perfectly adequate in my
testing. In fact it is very hard to get the VM to use swap space at
all without increasing the default value of "swappiness", which I
think shows the default is poorly chosen.
Agreed. Those who make comments like that evidently don't have an
older system, or if they do, don't use it. I regularly run on older
hardware with few issues. Granted, it's not as fast as my 64-bit
machines, but why should I spend money I don't have to upgrade? The
economy is not that great now, and it can be more expensive to upgrade
an older system than a newer one.
Post by Rob OpenSuSE
There's no reason to tell people not to use old machines just because
you'd not spec out such a machine for a desktop. 256MB RAM is plenty
for many SOHO server applications.
I run a server with 128MB. It doesn't need much to just serve up files.
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the ones people will look at when thinking about where
to put the optimization efforts:

http://smolts.org/static/stats/stats.html

Stano
Rob OpenSuSE
2009-01-09 10:30:48 UTC
Permalink
Post by Stanislav Visnovsky
Post by Larry Stotler
I run a server with 128MB. It doesn't need much to just serve up files.
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the one people will look at when thinking about where to put
No one's asked for the desktop to be optimised for 256MB RAM; it's the
word "requirement" and the 1GB figure that looked like a very bad idea
to me. Precisely because of SOHO server applications - and, at least
according to most Internet pundits, "it will never be the year of Linux
on the desktop", so we shouldn't forget the non-GUI server side :)


Firefox 3 seems to have benefited a lot from the work to reduce
memory consumption. I've noticed it on a machine with 4GB RAM
(despite it having AMD's dual-channel, non-FSB memory controller), and
the result is much more pleasant than the flabby feel of the older
version.

Years ago, memory access on the first micro I used took 1 or 2 CPU
cycles. Now we have new chips with L3 caches taking 45 cycles, and
system memory a lot more (100+?); so the old "memory is cheap and fast"
meme doesn't hold so well now. Memory is still cheap, and faster than
it used to be, but compared to the increase in processing speed it has
lagged.
Post by Stanislav Visnovsky
openSUSE and Linux's RAM needs have really gotten out of hand lately.
I was running 10.2 on a PowerMac 7500/G4/700 with 256MB with no
problem. Now I need 512 or it's slow as molasses. Not a good trend
To me, Qt4 & KDE4 don't seem to have blown up memory requirements.
Firefox 3 is working with less. But I can only compare 10.3
requirements on i686 with 11.1, not 10.2 directly. So am I the only one
who wonders about an issue under the Power arch, or simple
configuration issues like desktop search, despite the fact that 512MB
RAM is clearly a more sensible system size than 256MB?
Larry Stotler
2009-01-09 15:55:38 UTC
Permalink
On Fri, Jan 9, 2009 at 5:30 AM, Rob OpenSuSE
Post by Rob OpenSuSE
No one's asked for the desktop to be optimised for 256MB RAM; it's the
word "requirement" and the 1GB figure that looked like a very bad idea
to me. Precisely because of SOHO server applications - and, at least
according to most Internet pundits, "it will never be the year of Linux
on the desktop", so we shouldn't forget the non-GUI server side :)
I could care less about Linux on anyone else's desktop, so long as it
works on mine. I gave up trying to enlighten people a long time ago.
I just charge them to fix their broken WinDoZe now.
Post by Rob OpenSuSE
Firefox 3 seems to have benefited a lot from the work to reduce
memory consumption. I've noticed it on a machine with 4GB RAM
(despite it having AMD's dual-channel, non-FSB memory controller), and
the result is much more pleasant than the flabby feel of the older
version.
I dunno. I do know that on my Thinkpad A22p, with a P3/1GHz and 256MB
RAM, Firefox takes up about 200MB after opening half a dozen tabs, and
that I use about 200MB of my swap. And the machine has a bad RAM slot.
I'm going to try a 512MB chip at some point, but the machine is only
supposed to support 256MB modules.
Post by Rob OpenSuSE
Years ago, memory access on the first micro I used took 1 or 2 CPU
cycles. Now we have new chips with L3 caches taking 45 cycles, and
system memory a lot more (100+?); so the old "memory is cheap and fast"
meme doesn't hold so well now. Memory is still cheap, and faster than
it used to be, but compared to the increase in processing speed it has
lagged.
Yeah, I've read about that. Adding the L3 isn't the boost that
they've tried to make it out to be.
Post by Rob OpenSuSE
To me, Qt4 & KDE4 don't seem to have blown up memory requirements.
Firefox 3 is working with less. But I can only compare 10.3
requirements on i686 with 11.1, not 10.2 directly. So am I the only one
who wonders about an issue under the Power arch, or simple
configuration issues like desktop search, despite the fact that 512MB
RAM is clearly a more sensible system size than 256MB?
10.2 ran great on older PowerMacs with 256MB. 11.0 seemed to be
slower, but it got better after an upgrade to 512MB. I actually have
a gig in my 9600, but it's not being used right now.

I have an old PowerBook 3400c that I finally upgraded to its max of
144MB, and a PowerMac 6500 with a max of 128MB, so I don't expect these
lower-end systems to work (I wish I could get a PPC version of Damn
Small...). However, my 9600 with a G4/700 and 1GB shouldn't be slower
than my Dell dual-Xeon 500MHz with 512MB. These old workhorses seem to
be getting left behind by the major distros because of all the new
"features" being added, like the glitz and search tools and stuff.

Search tools should NOT be enabled by default. I work at a computer
shop, and I am constantly removing Google search and all the others,
because they get installed and NEVER used, and they slow the computers
down.
Matt Sealey
2009-01-09 18:20:08 UTC
Permalink
Post by Larry Stotler
On Fri, Jan 9, 2009 at 5:30 AM, Rob OpenSuSE
Damn Small....). However, my 9600 with a G4/700 and 1GB shouldn't be
slower than my Dell Dual Xeon 500Mhz with 512MB.
Really? I would say it would just nip it. The G4 is a wonderful chip
but it's not a server-class SMP system.
Post by Larry Stotler
workhorses seem to be getting left behind by the major distros because
of all the new "features" being added like the glitz and search tools
and stuff.
I don't think the glitz OR the search tools are too much of a problem.
Disk activity goes up and they use some CPU time, but when that's
occurring it's usually during an idle spot. It has some implications
for power management - I'm not sure indexing stops when a system is on
batteries - but what I object to is stuff like Beagle, which is an
absolute monster. C# apps take up way too much memory and ran far
slower on PowerPC than I would have expected, given that Mono has an
officially supported runtime there.

The soaking-up-memory part is the main point, though; having search
tools push memory usage up so much is really infuriating. Especially
since absolutely none of them are as good as Google Desktop (to be
fair, I haven't installed Google Desktop for Linux yet, because I don't
actually have an x86 Linux box with a desktop to install it on. Even if
I did, though, would I get the same integration into the GNOME menu
etc.?)

Whatever happened to Strigi? That was supposed to be the saving grace
of desktop-independent search. KDE includes it by default now because
it needs it... but I still got Beagle for my troubles. I also thought I
saw Strigi in the plan for 11.1, but I never saw any reason why it
didn't get there. Not enough features? I know Novell sponsors Mono, but
really... Beagle sucks.
Post by Larry Stotler
Search tools should NOT be enabled by default. I work at a computer
shop, and I am constantly removing Google search and all the others,
because they get installed and NEVER used, and they slow the computers
down.
Do you ask your customers before you do this?

I use Google Desktop all the time, in fact I'd not know what to do
without it. I make sure all my systems are reporting to my Google
account, so when I search, it tells me which machine it's actually
on.. I'd be more than pissed off if you removed it from MY computer.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Larry Stotler
2009-01-09 19:01:35 UTC
Permalink
Post by Matt Sealey
Really? I would say it would just nip it. The G4 is a wonderful chip
but it's not a server-class SMP system.
Why need a server-class system? That's a little overkill. Faster P3
and G4 systems are still viable machines, especially when running
Linux. I write articles for a Mac site (when I have the time), and
refurbishing and using older hardware is what it's all about.
Post by Matt Sealey
I don't think the glitz OR the search tools are too much of a problem.
Disk activity goes up and they use some CPU time, but when that's
occurring it's usually during an idle spot. It has some implications
for power management - I'm not sure indexing stops when a system is on
batteries - but what I object to is stuff like Beagle, which is an
absolute monster. C# apps take up way too much memory and ran far
slower on PowerPC than I would have expected, given that Mono has an
officially supported runtime there.
I don't use any of that; I have no need for them. And when you are on
older hardware with older video chips, glitz steals CPU power.
Post by Matt Sealey
Do you ask your customers before you do this?
Yep. And they're like, "what's that?" I haven't had one in 6 months
who knew what it was or where it came from.
Post by Matt Sealey
I use Google Desktop all the time, in fact I'd not know what to do
without it. I make sure all my systems are reporting to my Google
account, so when I search, it tells me which machine it's actually
on.. I'd be more than pissed off if you removed it from MY computer.
But that makes you the EXCEPTION. Also, I check for activity and the
install date: if it has never been run, then it's not being used, and
that's easy enough to determine. Perhaps more people would use it if
they really understood what they were installing. It's like all those
useless toolbars that get installed if you don't check what's being
installed.
Matt Sealey
2009-01-10 05:12:36 UTC
Permalink
Post by Larry Stotler
Post by Matt Sealey
Really? I would say it would just nip it. The G4 is a wonderful chip
but it's not a server-class SMP system.
Why need a server class system?
I never said you needed one. A Dual Xeon was compared to a G4. They're
not comparable in the same way you wouldn't compare a Celeron to a
Xeon these days.
Post by Larry Stotler
I don't use any of that. Have no need for them. And, when you are on
older hardware with older video chips, glitz steals cpu power.
If you're on older hardware with older video chips, the glitz never
gets enabled in the first place - certainly nothing like Compiz, and
the same if you mean stuff like what Plasma does on KDE. I have that
running on a 400MHz G2_LE core with 128MB. It runs pretty well if you
trim the system first. This is an extreme case... it requires a hell of
a lot of work, but it's definitely possible to knock KDE4 down so it's
basically featureless (exactly like you want it), with barely anything
done on startup except getting into X and starting networking.

Of course it's crippled the moment you start OpenOffice or so..
Post by Larry Stotler
Post by Matt Sealey
I use Google Desktop all the time
But, that makes you the EXCEPTION.
I really think more people than me actually use Google Desktop.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Larry Stotler
2009-01-10 13:12:07 UTC
Permalink
Post by Matt Sealey
I never said you needed one. A Dual Xeon was compared to a G4. They're
not comparable in the same way you wouldn't compare a Celeron to a
Xeon these days.
In a way. The Xeon's major advantages are the dual CPUs and the 100MHz
FSB. However, the G4 is touted as being faster than the P3, and a dual
500 is comparable to a 750MHz chip in many ways, so it's not that far
off. The big disadvantage is the G4's 50MHz memory bus.
Post by Matt Sealey
If you're on older hardware with older video chips the glitz never
gets enabled in the first place - certainly nothing like Compiz and if
you mean stuff like Plasma does on KDE.. I have that running on a
400MHz G2_LE core with 128MB. It runs pretty well if you trim the
system first. This is an extreme case... it requires a hell of a lot
of work but it's definitely possible to knock KDE4 down so it's
basically featureless (exactly like you want it) with barely anything
done on startup except get into X and start networking.
Which shows me it's just not a compelling alternative. The devs seem to
have fallen into the "it's there, let's waste it" trap; they have
forgotten their roots, so to speak. While I don't have a problem with
targeting new users, I don't think it should be done at the expense of
the "old timers" like me. I have seen no real features I care for in
KDE4, and KDE3 is more stable. Maybe KDE4 will get there. I have
requested that KPersonalizer be ported to Qt4 so we can have a way to
turn off all those (to me) useless desktop effects.
Post by Matt Sealey
Of course it's crippled the moment you start OpenOffice or so..
Which is why I don't use it and don't recommend it. KOffice works
much better and does everything I need. Heck, I do just fine with
Wordpad in WinDoZe.
Post by Matt Sealey
I really think more people than me actually use Google Desktop.
Never meant that, but my experience has shown that the majority of my
customers don't use it, even though it got snuck in on them. Same with
Windows Search. I disable the install prompt, especially on slow
systems; if they want it, they can install it. I've seen it and Google
Search bring a system to its knees, just like Beagle.
Rob OpenSuSE
2009-01-10 13:32:24 UTC
Permalink
Post by Larry Stotler
Post by Matt Sealey
of work but it's definitely possible to knock KDE4 down so it's
basically featureless (exactly like you want it) with barely anything
done on startup except get into X and start networking.
Which shows me it's just not a compelling alternative. The devs seem
to have fallen into the "it's there, let's waste it" trap. They have
Actually, Qt4 and KDE4 have not particularly increased memory
consumption, according to my observations.

There has been stuff added, but that is a matter of configuration. 11.1
just has a lot of issues right now; still, testing the pre-release on
even 8-year-old hardware, I had acceptable performance for occasional
desktop use.

A lot of people with much newer machines have complained of performance
problems, so I think such a sweeping generalisation as the "it's there,
let's waste it" comment is as erroneous as the 1GiB RAM requirement.


There's a trend towards more power-efficient devices, which may be
encouraging developers not to use memory naively.
Matt Sealey
2009-01-12 17:36:17 UTC
Permalink
On Sat, Jan 10, 2009 at 7:32 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Larry Stotler
Post by Matt Sealey
of work but it's definitely possible to knock KDE4 down so it's
basically featureless (exactly like you want it) with barely anything
done on startup except get into X and start networking.
Which shows me it's just not a compelling alternative. The devs seem
to have fallen into the "it's there, let's waste it" trap. They have
Actually, Qt4 and KDE4 have not particularly increased memory
consumption, according to my observations.
There has been stuff added, but that is a matter of configuration. 11.1
just has a lot of issues right now; still, testing the pre-release on
even 8-year-old hardware, I had acceptable performance for occasional
desktop use.
A lot of people with much newer machines have complained of performance
problems, so I think such a sweeping generalisation as the "it's there,
let's waste it" comment is as erroneous as the 1GiB RAM requirement.
It runs great in 256MB and 512MB - as well as you could expect any OS
that does that much to run in them, which is to say, fairly usable. I
remember back in the days when Windows 2000 was a limited beta and it
would install on a system with 24MB of memory. It ran like crap.
Systems with 32MB of memory - now, you could run Office on those! The
limiting factor was NOT the slow processors (30-60MHz Pentiums) or the
lack of RAM, but that when it DID swap, the ancient IDE controllers
would effectively lock the machine up. If you put a pretty decent
Promise IDE card in there (we're talking ATA66; I still have that
card), the whole thing would just pop to life.

In later betas and the final release they bumped the requirement to
64MB to set a sort of baseline performance expectation. That is not to
say it would not run in 24MB anymore (at worst it would flake out
during install because of a requirements check, but you could swap in
64MB to install and then go back to a lower size - easily possible if
you're using Ghost or PartitionMagic to push sysprepped images);
rather, there are many, many things that go with a system that only has
24MB which limit performance far more.

We can bring this up to the present somewhat and point out that the
only reason the Efika (400MHz G2 PowerPC, no L2 cache, 128MB DDR2) runs
it so badly is that it has no DMA-enabled ATA driver. The processor is
fast, the memory is fast, but the lack of fast swap space really lets
it down.

I cannot imagine you would ever have a PC with even 256MB that could
not get by with KDE4 - I actually have KDE4 running on my Via EPIA
M1000 as of last week, and at 1GHz with 256MB RAM it's fine, fine, fine
(my only disappointment was finding that the unichrome driver sucks, as
does the unichrome DRI, which will not load, so I could not spin a
desktop cube around; but otherwise it was snappy).

So, KDE4 has reduced memory requirements and Qt4 is far better
optimized and more efficient - so where is the problem? Well, I'd say
it's avahi-daemon, postfix, powerd, beagle: pick any daemon which
starts up at boot, or before the desktop; their number has doubled from
10.3 to 11.1. I can't nitpick at the accessibility daemons, but the
amount of stuff loaded at boot has gotten way out of control.

Let's consider something like the FUSE filesystem. It does not take too
long to load or use too much memory, but it is a good example of the
pattern. boot.fuse brings it up at boot time, way before the desktop
and way, way after everything has been pulled from fstab. This may be
necessary to, for instance, mount the user's Windows drive for the
desktop. But why not arrange things so that the drive's filesystem is
only detected, drivers loaded, and filesystems mounted as and when the
user ACTUALLY goes to access it?

When you click a USB stick, it mounts, the VFS layers kick in, and FUSE
modules are loaded. Do you really need the kernel driver around for 5
days before a user does this? Can't the module be loaded on demand and
kept around until memory pressure or something else evicts it? The same
would be true of anything else.
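The lazy pattern being asked for can be sketched like this. This is a
toy illustration, not how boot.fuse actually works; the modprobe/mount
invocations are just the standard CLI tools:

```python
# Hypothetical sketch of on-demand driver loading: instead of a boot
# script loading the fuse module unconditionally, load it only when the
# first mount request for that filesystem type arrives.
import subprocess

def module_loaded(name, modules_text=None):
    """Check /proc/modules (or an injected listing, for testing)."""
    if modules_text is None:
        with open("/proc/modules") as f:
            modules_text = f.read()
    return any(line.split()[0] == name
               for line in modules_text.splitlines() if line.strip())

def mount_on_demand(device, mountpoint, fstype="fuse"):
    # Load the driver lazily, then mount; a real implementation would
    # also arrange for the module to be dropped again under memory
    # pressure, which is the other half of the argument above.
    if not module_loaded(fstype):
        subprocess.check_call(["modprobe", fstype])
    subprocess.check_call(["mount", "-t", fstype, device, mountpoint])
```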

Postfix is another example of something that is just too big for its
boots. How many people actually configure their system so that SMTP
mail goes directly through this daemon? It's only there because cron
needs an SMTP daemon; a user with a netbook would never care. Debian
etc. use ssmtp, which is much smaller and fulfils the requirements
exactly, instead of a full SMTP/LMTP-compliant mail solution with
filters, scripting, and a full mail queue which gets started at boot
and only hangs around waiting for a cron job to fail. Postfix takes
ages to start and soaks up resources. If someone really does need it,
they can grab a postfix pattern - I mean, why not? Or what if they like
exim better? Postfix is hard for a novice to uninstall once it's there :)
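For comparison, the whole job cron actually needs done fits in a few
lines. This stand-in does roughly what ssmtp does; the smarthost name
is a placeholder (real ssmtp reads it from ssmtp.conf):

```python
# Sketch: a cron-mail forwarder's entire job is "wrap the output in
# minimal headers and hand it to a smarthost" - no queue, no filters,
# no daemon waiting at boot.
import smtplib

def build_message(sender, recipient, body):
    """Assemble the bare headers cron output needs."""
    return "From: %s\r\nTo: %s\r\n\r\n%s" % (sender, recipient, body)

def forward_to_smarthost(sender, recipient, body, host="mail.example.org"):
    # host is a placeholder; nothing runs until a message actually
    # needs sending, which is the point being argued above.
    server = smtplib.SMTP(host)
    try:
        server.sendmail(sender, [recipient],
                        build_message(sender, recipient, body))
    finally:
        server.quit()
```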

I am sure there are plenty of other things which could be deferred as
services (is this even possible with init or any other boot process?)
until really needed, or simply cut out or replaced with
lower-resource-using systems.

The obvious trick is to use memory when you need it and load things
when they're about to be used. Even Windows manages to install a
billion services on a system, but Windows Installer is only started
when you're installing something, Windows Live Communications Platform
is only started when you start Windows Live apps, and IMAPI CD burning
doesn't start unless I'm about to burn a CD. VirtualBox doesn't start
its service until you're loading VirtualBox. Contrast VMware, which
manages to soak up 200MB of auth, NAT, disk-mounting, and DHCP relay
daemons before you even start a VM - and this is just from VMware
Player (which I have not run since I installed it).
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Greg KH
2009-01-12 17:42:55 UTC
Permalink
Post by Matt Sealey
I cannot imagine you would ever have a PC with even 256MB that could
not get by with KDE4 - I actually have KDE4 running on my Via EPIA
M1000 as of last week now, and at 1GHz and with 256MB RAM, it's fine,
fine, fine (my only disappointed moment was when I found the unichrome
driver sucks, as does the unichrome DRI which will not load, so I
could not spin a desktop cube around. But otherwise it was snappy.).
I'm working on resolving this. The SLED11 kernel should have support
for this, and so the next updated 11.1 kernel will also get it "for
free". You'll have to pull in a different xorg driver package, which I
think just got checked in, but I haven't looked to be sure.

After SLED11 is released, and you still have problems with this, let me
know and I'll be glad to try to work through it, as I have a laptop with
this chipset in it too.

thanks,

greg k-h
Matt Sealey
2009-01-09 18:28:00 UTC
Permalink
On Fri, Jan 9, 2009 at 4:30 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Years ago, memory access on the first micro I used took 1 or 2 CPU
cycles. Now we have new chips with L3 caches taking 45 cycles, and
system memory a lot more (100+?); so the old "memory is cheap and fast"
meme doesn't hold so well now. Memory is still cheap, and faster than
it used to be, but compared to the increase in processing speed it has
lagged.
I'm not sure I agree with you here. Memory is cheap and fast, but CPU
cycles have gotten shorter. 11 cycles on a QDR 800MHz bus go by much
faster than 2 cycles on a 33MHz bus, if it was even that. Even 140
cycles to main memory is faster. And once you get past the latency, the
data is burst in and cached for longer.

Access *times* are ~10 to ~100 times better than they were in the era
you remember. Don't think of them in cycles, because bus speeds change;
think of them as a proportion of your total CPU speed and of your total
memory bus speed (be it an internal controller or an FSB).

My problem with MY systems is that I have a 128MB box I want to run a
desktop on, and the system is NOT upgradable. However, it's nice, fast
DDR2 (with no L2 to back it up - poor little embedded processors...),
and it is still a hell of a lot faster doing random memory accesses on
badly formatted data than the old micros that did memory accesses in 2
cycles.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Rob OpenSuSE
2009-01-10 02:37:05 UTC
Permalink
Post by Matt Sealey
On Fri, Jan 9, 2009 at 4:30 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Years ago, memory access on the first micro I used took 1 or 2 CPU
cycles. Now we have new chips with L3 caches taking 45 cycles, and
system memory a lot more (100+?); so the old "memory is cheap and fast"
meme doesn't hold so well now. Memory is still cheap, and faster than
it used to be,
compared to the increase in processing speed it has lagged.
^^^^^^^^^^^^^ see the relative comparison
Post by Matt Sealey
I'm not sure I agree with you here. Memory is cheap and fast, but CPU
cycles have gotten shorter. 11 cycles on a QDR 800MHz bus go by much
faster than 2 cycles on a 33MHz bus, if it was even that. Even 140
cycles to main memory is faster. And once you get past the latency, the
data is burst in and cached for longer.
On old systems, bloat caused a few megabytes of extra memory accesses;
now it can be 100-400MB. And it's not 11 cycles: CPUs wait hundreds of
cycles on cache misses, never mind if there's a page fault and a disk
access involved.

In relative terms, memory has become slower, so even on systems which
never see memory pressure, you don't want your desktop programs all
presuming they can use major chunks of physical memory, as if they had
the system to themselves.
Matt Sealey
2009-01-10 05:18:42 UTC
Permalink
On Fri, Jan 9, 2009 at 8:37 PM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Matt Sealey
On Fri, Jan 9, 2009 at 4:30 AM, Rob OpenSuSE
I'm not sure I agree with you here. Memory is cheap and fast, but CPU
cycles have gotten shorter. 11 cycles on a QDR 800MHz bus go by much
faster than 2 cycles on a 33MHz bus, if it was even that. Even 140
cycles to main memory is faster. And once you get past the latency, the
data is burst in and cached for longer.
On old systems, bloat caused a few megabytes of extra memory accesses;
now it can be 100-400MB. And it's not 11 cycles: CPUs wait hundreds of
cycles on cache misses, never mind if there's a page fault and a disk
access involved.
In relative terms, memory has become slower, so even on systems which
never see memory pressure, you don't want your desktop programs all
presuming they can use major chunks of physical memory, as if they had
the system to themselves.
You're talking a lot of crap, frankly.

You can't possibly think that memory access latency has increased
compared to the processors you used to use. "In relative terms" - what
is that supposed to mean? Try measuring memory access latency against
clock speed in relative terms: a 1MHz 6502 taking 1 or 2 cycles to
access main memory is not relatively faster than a 1.8GHz Core 2 Duo
taking 14 or 15 cycles to access L2 cache, and still not faster than
one taking ~150 cycles to access main memory on a cache miss. And the
cost of a cache miss, and the overhead involved beyond the raw access
latency, is NOT something you can criticize: it gives far, far more
benefit than having no cache at all.
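Putting numbers on that comparison (using the figures quoted above):

```python
# Absolute access latency = cycles / clock frequency.
def latency_ns(cycles, clock_hz):
    return cycles / clock_hz * 1e9

old = latency_ns(2, 1e6)      # 1MHz 6502, 2-cycle access: 2000 ns
new = latency_ns(150, 1.8e9)  # 1.8GHz Core 2, ~150-cycle miss: ~83 ns
# Latency measured in cycles rose 75x, but wall-clock latency still
# fell roughly 24x - the crux of the disagreement in this thread.
print(old, new, old / new)
```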

Yes, I agree with you about bloat, but while memory access hasn't
scaled with processor speed, it is certainly not "relatively slower"
than it was 10 years ago.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Rob OpenSuSE
2009-01-10 12:02:39 UTC
Permalink
Post by Matt Sealey
On Fri, Jan 9, 2009 at 8:37 PM, Rob OpenSuSE
Post by Matt Sealey
On Fri, Jan 9, 2009 at 4:30 AM, Rob OpenSuSE
You're talking a lot of crap, frankly.
As you've become rude, I'll just suggest you clue up by finding some
CPU architecture explanations.
Matt Sealey
2009-01-12 16:55:08 UTC
Permalink
On Sat, Jan 10, 2009 at 6:02 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Matt Sealey
You're talking a lot of crap, frankly.
As you've become rude, I'll just suggest you clue up by finding some
CPU architecture explanations.
Oh really? Wow. I didn't know I could spend 11 years designing and
implementing embedded systems on resource-constrained designs, and not
know a thing about how computars works! I will get right onto the
inter-tubes right away and find out more!!

Your little dissertation on how memory access has gotten slower
relative to older processors is naive at best, and wrong if you sat
down with a real chip and looked at its real performance. A 100 cycle
latency for accessing memory on a cache miss - is not and cannot take
longer on a chip capable of running 3 billion cycles a second compared
to one which may only do 30 million or even 300 million with the same
latency. This is simple math; latency relative to cycles taken has
gone up (from 1 or 2 up to perhaps several hundred in designs like the
Pentium 4) but the data transfer capabilities of the memory
controller, and the size of the L2 cache on these designs, mean
that one cycle on the old system, scaled up, accounts for a far larger
share of real time.

I really do not see how you could equate, at a stab, a 30MHz processor
taking 20 cycles from a 16MHz, 16-bit memory bus being faster at
accessing memory than a 3GHz processor taking 200 cycles from a
1066MHz 64-bit (or 128-bit) memory bus, even in relative terms, and
you can poke around with performance counters in x86 and PowerPC
processors and see this proved out in reality.

The "Finnish democoder competition" method of coding where every cycle
counts and everything has to load in 64k really won't make KDE run
faster, nor does it explain why increased USE of memory (which is not
down to the desktop itself, since KDE4 for example uses 10% less
memory here than KDE3, which has "less features") would slow a system
down simply because it has to access more of it. I dare say cache
management algorithms both in hardware and software have advanced
enough that your point is absolutely moot even if it WERE true on a
basic, high-school textbook level.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Rob OpenSuSE
2009-01-12 18:39:40 UTC
Permalink
Post by Matt Sealey
On Sat, Jan 10, 2009 at 6:02 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Matt Sealey
You're talking a lot of crap, frankly.
As you've become rude, I'll just suggest you clue up by finding some
CPU architecture explanations.
Your little dissertation on how memory access has gotten slower
relative to older processors is naive at best, and wrong if you sat
down with a real chip and looked at its real performance.
You actually appeared to agree, with "relatively slower" a few lines
further down in your statement that included the word scaling, which
usually applies to capacity and size comparisons, not relative
performance. Misrepresenting and misquoting is not helpful to a
constructive discussion, nor is the sentiment you've shown.

Again, IMO the same functionality is not generally consuming more
memory in newer software; in fact it is clear that many projects are
avoiding that for good reasons. Despite a large increase in the number
of 2-4GiB desktop boxes, they've taken steps to reduce memory footprint.
Matt Sealey
2009-01-12 20:02:15 UTC
Permalink
On Mon, Jan 12, 2009 at 12:39 PM, Rob OpenSuSE
Post by Rob OpenSuSE
You actually appeared to agree, with "relatively slower" a few lines
further down in your statement that included the word scaling, which
usually applies to capacity and size comparisons, not relative
performance. Misrepresenting and misquoting is not helpful to a
constructive discussion, nor is the sentiment you've shown.
I said that taking 1 cycle on a 30MHz system does not mean that 300
cycles on a 3GHz system is slower, just because it takes more cycles.
Every memory access on the older system will take that amount of time,
whereas accesses to L2 on a lot of systems may take 11-14 cycles
(using a G4 as a reference, it went up from 11 to 12, then to 14 with
ECC enabled on a 7448 vs. 7447A). Access of data already in L1 is
practically free (i.e. less time than the instruction, which is
usually 1 cycle for 95% of instructions on PowerPC).

Looking at a CPU holistically you take into account the sum of its
parts, and the way they interact, and how this affects your code and
performance. You are fixated on how many cycles it takes. You're also
using it as an argument about using MORE memory. Both of these are
wrong.
Post by Rob OpenSuSE
Again IMO the same functionality is not generally consuming more
memory in newer software, in fact it is clear that many projects are
avoiding that for good reasons, despite a large increase in number of
2-4GiB destkop boxes, they've taken steps to reduce memory footprint.
And how does some app using more memory for a task somehow reduce its
performance because memory latencies and access times have increased
over the years?

Using 10MB or 100MB of memory for something makes no difference if
your L2 cache is 256KB - you can never fit all of it in, so it will go
to main memory at some point. On a system where you have no L2 cache,
you will nearly always be looking at main memory. In these situations
with modern processors, the time taken to perform a miss, fetch the
new data, is still much less in real measurable time, than it was on
older processors with more limited architectures. Every time you swap
from application to application, large swathes of data will need to be
loaded in - and caches effectively flushed so as to make room for the
currently running task and not the previous one. Embedded processors
allow locking code into L1 or L2 for this purpose - for instance
something you need to be there all the time. Linux doesn't bother.

We're not really concerned here with how much time a CPU wastes
accessing main memory. It's pretty clear that no component running on
a desktop Linux is going to fit entirely in cache, and in the end,
this makes it a pretty moot point.

What is to be concerned about is how a desktop system that had minimum
requirements of 256MB a couple years back now has requirements - even
with the purported improvements in memory usage for KDE4 for example -
which are upwards of 512MB. These are obviously not the fault of the
desktop itself, but perhaps the new technologies that came with it.
Search tools like Beagle have some enormous resource usage, which may
or may not be down to Mono. I know ATI Catalyst Control Center doesn't
do much more than the old ATI Control Panel did on Windows, but it
still takes up 60MB on boot here. That might be a quarter of the
memory in a system, and cause premature use of the paging file - which
shouldn't be too bad on most systems, but on others (see previous mail
about slow hard disks; contrast the speeds of USB-connected disks or
single-level NAND flash) it can cause real problems. That is a lot of
memory for a couple of sliders.

It has nothing to do with memory access times but of the overall use
of memory. A Linux desktop should boot in 256MB (as the installer
won't install in less) and have a large amount left for applications -
currently it soaks up just about 210MB on my Pegasos and Via EPIA
after login to the GNOME desktop - no Compiz etc. enabled. I think
this is somewhat unacceptable. It's workable, but it could be less.

If you have 4GB or 8GB of memory it is nothing to care about, but
users put in this memory to run applications, not to provide space for
50 boot services which sit idle, most of which are only there to
provide people with large amounts of memory to get things done
quicker. This is not very friendly to those who run in more
constrained environments. I'm not looking for GNOME etc. to run in
128MB (although in 10.3 it did, and I had enough memory left to run
applications before swapping) but reducing the memory footprint of the
basic install would be awesome.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Larry Stotler
2009-01-12 21:25:16 UTC
Permalink
Post by Matt Sealey
If you have 4GB or 8GB of memory it is nothing to care about, but
users put in this memory to run applications, not to provide space for
50 boot services which sit idle, most of which are only there to
provide people with large amounts of memory to get things done
quicker. This is not very friendly to those who run in more
constrained environments. I'm not looking for GNOME etc. to run in
128MB (although in 10.3 it did, and I had enough memory left to run
applications before swapping) but reducing the memory footprint of the
basic install would be awesome.
This is the same problem that windows has. Too many apps think they
are the most important thing you'll never use. Like I said, they'll
use it cause it's there and only take into consideration their program
and not the fact that every other dev seems to feel the same way (not
that I'm accusing the devs here, but it's a problem everywhere). I'm
gonna have to go on a hunt to try to remove a lot of unnecessary stuff
without breaking anything......
Rob OpenSuSE
2009-01-13 11:49:39 UTC
Permalink
Post by Larry Stotler
Post by Matt Sealey
If you have 4GB or 8GB of memory it is nothing to care about, but
users put in this memory to run applications, not to provide space for
50 boot services which sit idle, most of which are only there to
provide people with large amounts of memory to get things done
quicker. This is not very friendly to those who run in more
constrained environments. I'm not looking for GNOME etc. to run in
128MB (although in 10.3 it did, and I had enough memory left to run
applications before swapping) but reducing the memory footprint of the
basic install would be awesome.
Linux does not swap!!! It's a demand paged virtual memory system.
Those "idle" processes, will relinquish their clean pages, to be
re-read in on demand by page faults from disk, and anonymous dirty
pages can be saved in the (misnamed) swap space by the VM.

Unfortunately the value of "swappiness" seems to be set at 60, which
according to my tests appears too low, so that even on a 512MiB
system, swap space is generally unused, for typical "netbook" like
usage. Increasing the value to 95 or 100, did get some pages written
to the swap space, thus increasing the memory available for
applications and kernel caches.
Post by Larry Stotler
This is the same problem that windows has. Too many apps think they
are the most important thing you'll never use.
Any Specifics?
Larry Stotler
2009-01-13 13:24:42 UTC
Permalink
On Tue, Jan 13, 2009 at 6:49 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Larry Stotler
This is the same problem that windows has. Too many apps think they
are the most important thing you'll never use.
Any Specifics?
That's a big list, but here's a couple of examples on WinDoZe:

Java
Quicktime
iTunes
Any Search program
CD Burning programs that override the Windows default settings when
you insert media
Instant Messengers(The average at my shop is 3 installed - MSN being
the most often seen)
Printer utilities that don't need to run
Office quickstart programs
Hardware utility programs(video, sound, etc) that no one ever tweaks.
Advanced Text Services and the language bar (which is really only good
for East Asian users)
Firewall(for a desktop on a hardware router, it's pointless. For a
laptop that's on different connections it's not). Both Windows and
Linux on this one.

Also, the Prefetch and Superfetch schemes will preload things like
installer programs that you only use one time. If you turn off
Superfetch, you can actually use Vista with 1GB RAM.

The biggest speed boost in Windows is using msconfig and turning off
all of the startup programs and services that don't need to be run.

I've even seen where I have uninstalled stuff like Norton and McAfee
and F-Secure and they still have stuff loaded on startup. That's why
Symantec and McAfee have removal tools, and then they still don't get
everything. And just try installing a current version of Norton or
McAfee on any machine slower than 2GHz with 512MB and watch your
machine turn into molasses.

Fortunately, Linux is nowhere near that bad, but openSUSE still
installs a lot of what I feel are unnecessary programs like Beagle,
OpenOffice (KOffice is much better and uses less resources if you use
KDE), AppArmor, etc. I realize that the devs want to make the
install easier and that they choose a specific set of apps to install,
but, from my experience, 90% of the people never even use a desktop
search program, so it's just a waste. Also, YaST can make it a pain
to uninstall stuff. Try removing OpenOffice, AppArmor or Beagle in
11.0, and you'll get dependency complaints. If the main module of a
program is selected for removal, all parts of it should be removed
unless they have a dependency elsewhere (which they shouldn't). I
haven't tried it in 11.1 yet.

The idea of preloading some things for faster startup speed is
laudable, but then again, it has huge tradeoffs on slower systems.
It's a shame there's not an easy-to-use tweaking program for Linux.
YaST has some things, but they aren't as robust as I would like to
see. I'd write one, but I'm not a programmer, so.....
Matt Sealey
2009-01-13 15:36:42 UTC
Permalink
Post by Rob OpenSuSE
Post by Larry Stotler
Post by Matt Sealey
If you have 4GB or 8GB of memory it is nothing to care about, but
users put in this memory to run applications, not to provide space for
50 boot services which sit idle, most of which are only there to
provide people with large amounts of memory to get things done
quicker. This is not very friendly to those who run in more
constrained environments. I'm not looking for GNOME etc. to run in
128MB (although in 10.3 it did, and I had enough memory left to run
applications before swapping) but reducing the memory footprint of the
basic install would be awesome.
Linux does not swap!!! It's a demand paged virtual memory system.
Those "idle" processes, will relinquish their clean pages, to be
re-read in on demand by page faults from disk, and anonymous dirty
pages can be saved in the (misnamed) swap space by the VM.
I really don't think there is any distinction to be made here.

You swap a page in physical memory for one on some backing store. The
terminology is naive but correct.
Post by Rob OpenSuSE
Unfortunately the value of "swappiness" set, seems to be 60, which
according to my tests appears too low, so that even on a 512MiB
system, swap space is generally unused, for typical "netbook" like
usage. Increasing the value to 95 or 100, did get some pages written
to the swap space, thus increasing the memory available for
applications and kernel caches.
On a 128MB system it gets there pretty fast; on 10.3 we had ~40MB space
left (which was then soaked by buffers and caches as is good to do).
However loading anything else in started really going at the page file.

On a slow disk this is an absolute nightmare.

There are two schools of thought on this; one, is that applications
should be stored in RAM and RAM only until significant memory pressure
arises. The other, is that data should live mostly in swap and only be
put into RAM when it's actually used.

Windows likes to keep it the second way and there is a kernel developer
who likes swappiness set really high (I forget his name.. was it Andrew
Morton?). As long as you have enough of a swap cache and a fast enough
disk this is awesome. I actually think this is the right way to do it.

However, if your disk is slow (USB, NFS, DMA-less ATA), you're pretty
much tied to how fast your disk is, which is.. very bad indeed. Couple
it with a low amount of memory, and you basically have no way out.
Post by Rob OpenSuSE
Post by Larry Stotler
This is the same problem that windows has. Too many apps think they
are the most important thing you'll never use.
Any Specifics?
There are none :D

Applications should allocate all the memory they'll need and the OS
should work out what to do with it.

As things like KDE and GNOME create more and more tasks and external
applications and rely on more services, the memory consumption of the
original tasks may go down, but it is then canceled out by the memory
consumption of the extra feature and extra external application, plus
any abstraction layered between.

We're not talking anymore about "KDE uses too much memory as an
application" but "SUSE loads far too much on boot". It's not just the
desktop.. it's every service, every module, that is installed by default
and enabled by default, which significantly reduces the amount of memory
available for future tasks.

Since most of them are essential to start other required boot services
not much can be done about it, but definitely things like postfix are
too big for the job they're meant to do (supply a dependency for cron)
and certain boot tasks can be and should be deferred until the actual
need arises before being started. Networking need not be brought up -
i.e. access to the internet etc., since you need lo set up for a lot of
things - until the user is at the desktop. On a Netbook this may be the
wireless card, and they may have to pick an SSID first.

Imagine that you do this on boot, on a new location - it sits in a
console, trying to access an SSID it remembered from the last time. It
doesn't exist. It has to scan.. and then fail. If it exists (maybe an
unsecured network with a common name such as "NETGEAR"), it still has to
scan for it, then configure and run DHCP over it, and potentially fail.
This will increase boot time only so the user can get to a desktop and
pick the right SSID and connect to the correct network as they wish.

Unless you are booting into a system which does its user authentication
over the network (Active Directory, LDAP, or crazy people who still use
YP etc.) then you don't need to bring up the network until they first
start an app that needs networking. On openSUSE this may be the Updater
app, although it SHOULD sit and wait for something else to access the
internet (i.e. notification from NetworkManager that a connection was
made and verified up, rather than forcing network access).

On demand is definitely a good way to go for anything, as it reduces
memory requirements up to the point something needs to be done. If you
do not have enough memory to do it at that point, then tough luck.. time
to buy new RAM. But if you never use it (Avahi for example, I have no
mDNS-compatible devices here) and never go to a share browser or go to
print a file, it shouldn't even be loaded.. the moment I do get one of
these lovely little printers or a Mac sitting here, I want to be able to
use it. Having the stuff on disk is great, loading it at boot and
soaking xMB of memory until that point, is wasteful.

-- Matt
Larry Stotler
2009-01-13 16:32:21 UTC
Permalink
On a 128MB system it gets there pretty fast; on 10.3 we had ~40MB space left
(which was then soaked by buffers and caches as is good to do). However
loading anything else in started really going at the page file.
On a slow disk this is an absolute nightmare.
Memory usage was much lower on the 10.x system from my experience as well.
On demand is definitely a good way to go for anything, as it reduces memory
requirements up to the point something needs to be done. If you do not have
enough memory to do it at that point, then tough luck.. time to buy new RAM.
But if you never use it (Avahi for example, I have no mDNS-compatible
devices here) and never go to a share browser or go to print a file, it
shouldn't even be loaded.. the moment I do get one of these lovely little
printers or a Mac sitting here, I want to be able to use it. Having the
stuff on disk is great, loading it at boot and soaking xMB of memory until
that point, is wasteful.
Agreed. However, since openSUSE is one of those kitchen-sink
distros, they have to balance it. I wish that SLICK had panned
out.

I've looked at SUSE Studio, but since it's still in alpha, you can't
get much access to it yet. And, it seems to only have i386 support
right now. What I need that for is PPC more than anything else. I
have a Powerbook 3400c that maxes at 144MB RAM and a PowerMac 6500
that maxes at 128MB. I don't expect to run a full desktop like KDE or
Gnome on them, but it would be nice to be able to run something basic like
TinyWM and Firefox. In that case, you don't need much other than the
services to start the system, and networking for wired or wireless.

I can actually go online with my Thinkpad 380XD P/233/96MB RAM (maxed
out) with Win2k. It's not fast to say the least but it works. That's
with FF 2.x. Haven't tried 3.x. And that's with a USB wireless
adapter. Of course, I can do more if I use Damn Small Linux and don't
expect to run a full modern desktop on hardware that old, but it would
be interesting to see how usable such old machines can be.
Rob OpenSuSE
2009-01-13 17:45:23 UTC
Permalink
Post by Larry Stotler
Agreed. However, since openSUSE is one of those kitchen sink thrown
in distros, they have to balance it.
I don't expect to run a full desktop like KDE or
Gnome on them, but it would be nice to be able to run something basic like
TinyWM and Firefox. In that case, you don't need much other than the
services to start the system, and networking for wired or wireless.
So have you tried using a netinstall, and then selecting pure X or
XFCE? There's also LXDE which you may be able to try out later, from
OBS.

You have to realise that the memory required to access modern websites
is high, so old low-memory hardware (now 96MiB and 144MiB) just isn't
going to be a realistic choice for general desktop usage.
Matt Sealey
2009-01-13 18:48:50 UTC
Permalink
On Tue, Jan 13, 2009 at 11:45 AM, Rob OpenSuSE
Post by Rob OpenSuSE
Post by Larry Stotler
Agreed. However, since openSUSE is one of those kitchen sink thrown
in distros, they have to balance it.
I don't expect to run a full desktop like KDE or
Gnome on them, but it would be nice to be able to run something basic like
TinyWM and Firefox. In that case, you don't need much other than the
services to start the system, and networking for wired or wireless.
So have you tried using a netinstall, and then selecting pure X or
XFCE? There's also LXDE which you may be able to try out later, from
OBS.
Xfce is better but it's still not "officially" supported by SUSE so
the actual integration into the rest of SUSE as a whole is absolutely
terrible. No SUSE theming, the login manager is ugly-as-sin xdm (which
is f**king crazy since the installer drags in gdm), all the settings
apps are just dumped into a big long menu (mixing Xfce, GNOME and YaST
choices which means scrolling 3 pages to find anything), panels are
set to their defaults, you can't GET to anything useful or DO anything
useful on this desktop. SUSE has traditionally set up the default
desktop to at least be windows-alike in nature to make people
transitioning more comfortable. Not the case with Xfce, and not
because they decided "people like it better that way". And that means,
not even a green desktop by default.
Post by Rob OpenSuSE
You have to realise that the memory required to access modern websites
is high, so old low-memory hardware (now 96MiB and 144MiB) just isn't
going to be a realistic choice for general desktop usage.
This is not old low memory hardware but specifically designed new
hardware. 128MB was determined a couple years back to be a
cost-effective and pretty usable amount of memory to run something
like Xfce in. Between 10.3 and 11.1 it's changed and while nothing
changed about Xfce (it's still ugly, and useless, but the only thing
that will actually get to desktop without forcing swap usage) the rest
of the system has bloated out past the line. I have nothing left in
11.1 but to have the system swap.

If you imagine the efforts on LTSP etc., a thin client needn't and
definitely shouldn't be specced like a "general desktop". If you
needed a 2GHz dual core and 2GB of RAM in every thin client, what
would the point of having thin clients be? Where is the cost or power
saving? It is a dumb concept, but this is what Linux forces. I
remember we did get a full GNOME desktop with Compiz enabled running
on our hardware:

http://www.powerdeveloper.org/movie/iris

Do you see any huge slowdowns? This is a 400MHz PowerPC, with no L2
cache, 128MB of RAM and a 64MB Radeon 9250 (r200). This demo was built
with Gentoo. You can see it slowing a little at points; but this is
lack of CPU power, and not much else.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Rob OpenSuSE
2009-01-13 16:49:08 UTC
Permalink
Post by Matt Sealey
Post by Rob OpenSuSE
Post by Larry Stotler
Post by Matt Sealey
If you have 4GB or 8GB of memory it is nothing to care about, but
users put in this memory to run applications, not to provide space for
50 boot services which sit idle, most of which are only there to
provide people with large amounts of memory to get things done
quicker. This is not very friendly to those who run in more
constrained environments. I'm not looking for GNOME etc. to run in
128MB
Linux does not swap!!! It's a demand paged virtual memory system.
Those "idle" processes, will relinquish their clean pages, to be
re-read in on demand by page faults from disk, and anonymous dirty
pages can be saved in the (misnamed) swap space by the VM.
I really don't think there is any distinction to be made here.
You swap a page in physical memory for one on some backing store. The
terminology is naive but correct.
Except that, as someone of your background knows, "swapping", i.e.
evicting all memory pages of a process and saving them to disk, is a
different memory technique, commonly used in the past on multi-user
systems when physical memory was larger than the logical address space
available to processes.

The point is, those "idle processes" do not sit there, hogging RAM.
Post by Matt Sealey
However loading anything else in started really going at the page file.
On a slow disk this is an absolute nightmare.
I've seen heavy usage of swap space, on 256MiB system, under KDE4
desktop, FF3 and YaST software management, which really did want
nearer 400MiB than 256MiB. And I actually used the slowest disk I
have that actually does UDMA correctly (a 4GB model circa 1999).

But what are reasonable expectations in this situation? It is clear
that when switching between those tasks the working set size is
exceeded, and a delay is inevitable. The system did not go into a
meltdown where, it took minutes rather than seconds to become
responsive.
Post by Matt Sealey
There are two schools of thought on this; one, is that applications should
be stored in RAM and RAM only until significant memory pressure arises. The
other, is that data should live mostly in swap and only be put into RAM when
it's actually used.
Windows likes to keep it the second way and there is a kernel developer who
likes swappiness set really high (I forget his name.. was it Andrew
Morton?). As long as you have enough of a swap cache and a fast enough disk
this is awesome. I actually think this is the right way to do it.
However, if your disk is slow (USB, NFS, DMA-less ATA), you're pretty much
tied to how fast your disk is, which is.. very bad indeed. Couple it with a
low amount of memory, and you basically have no way out.
It's about whether you write dirty pages to swap - pages which are not
backed by files the way, for instance, executable programs are. When
you look for a new page, it's always faster to throw away a clean or a
cache page. The consequence of keeping all written-to data pages in
memory, and not having them saved by the backing store, is that you
never reclaim those pages from the idle processes complained about.

My test backs up Andrew Morton: without swappiness being set high,
swap space is unused, which means the system is doing more page faults
than it otherwise would, because it's not able to evict data pages of
little-used processes in favour of the working set of running processes
and the OS cache.

Where it makes sense to have swappiness set low is when nightly
(lunchtime?) batch jobs are run, doing things like index builds for
desktop searches, locate, or something like a virus scanner, which are
done once, and you know that the files they read in won't be
reaccessed; though there are ways for applications to hint that
they're "read once". That reduces the perceivable lag when an idle
desktop application is used and needs to reclaim memory.

Small memory (128MB), a slow disk (USB, NFS, PIO-ATA) and running
large programs looks like a badly specified configuration to me. Not
having anon data saved in a swap space isn't going to help things. You still
will seek like crazy, servicing page faults.
Post by Matt Sealey
Post by Rob OpenSuSE
Post by Larry Stotler
This is the same problem that windows has. Too many apps think they
are the most important thing you'll never use.
Any Specifics?
There are none :D
This is the problem in the thread. I've mentioned specific
counter-examples to disagree with certain generalisations, and suggested
configuration issues (candidates like nepomuk / beagle). Nothing
constructive comes out of generalisations.
Post by Matt Sealey
We're not talking anymore about "KDE uses too much memory as an application"
but "SUSE loads far too much on boot". It's not just the desktop.. it's
every service, every module, that is installed by default and enabled by
default, which significantly reduces the amount of memory available for
future tasks.
The thing is, the system runs fine with 256 MiB, if you're not
expecting to both browse and run other applications. These idle
processes should only affect boot times.

Finally, the OS provides a "Net Install" which allows fine-grained
choices, and you can install a minimal system. If you have a low
amount of memory it's unreasonable not to use that option, but to
expect defaults made for the Live CD or a general desktop install to
be altered in a way that inconveniences the less knowledgeable desktop
user.
Matt Sealey
2009-01-13 18:34:37 UTC
Permalink
But if you never use it (Avahi for example, I have no mDNS-compatible
devices here) and never go to a share browser or go to print a file, it
shouldn't even be loaded.. the moment I do get one of these lovely little
printers or a Mac sitting here, I want to be able to use it. Having the
stuff on disk is great, loading it at boot and soaking xMB of memory until
that point, is wasteful.
I just had a thought. Is there even a way to check for the presence of
a service and then start it under Linux? I thought D-Bus would handle
this kind of thing. But in the end if you want to, say, make sure
daemons are loaded so you can perform discoveries and connect to
shares or printers, what do you talk to, in order to find out whether
it's running and whether it should be started?
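D-Bus does in fact provide this mechanism: service activation files. A minimal sketch, with a hypothetical service name and binary (`org.example.Indexer` and `/usr/bin/example-indexer` are made up for illustration). Dropping a file like this into `/usr/share/dbus-1/services/` lets the bus daemon start the program the first time any client messages that name, rather than at boot:

```ini
# /usr/share/dbus-1/services/org.example.Indexer.service  (hypothetical name)
[D-BUS Service]
Name=org.example.Indexer
Exec=/usr/bin/example-indexer
```

A client can check whether the service is currently running without starting it via the bus method `org.freedesktop.DBus.NameHasOwner`, or request activation explicitly with `StartServiceByName`.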

I understand that Avahi can't exactly be killed off if you want it to
work in an mDNS environment - it has to be around so it can accept the
multicasts and then notify of device presence. But it's an example of
something large, which if you are not on this kind of network, is not
needed.

I cannot say the same for hplip or cups. On most desktop systems these
services are started (at least in 11.0 and 11.1) - on most desktop
systems the printer is connected locally so it is not like you need an
active print queue running 100% of the time to accept data. You will
know when you want to print because something requests to print. Or
that is how it should be. As for hplip, what if you do not even own an
HP printer, scanner or fax? This is the kind of thing where printing
support should be installed, but nothing actually activated until you
first add a printer (and left disabled until you actively want to add
a printer through YaST2 - and if it IS an HP all-in-one, then by all
means start hplip!)

It is commendable for the installer to search for TV cards, ISDN/DSL
and printers, but if none are found, at the very least put the
software in a holding pattern so it needn't be supported. About things
like smartd - I can see this load on my Efika, but in older SUSE (not
11.0, since it works there) the smartmontools command-line tools say
SMART support is disabled and that I need to start it with "-s on" - but
it never worked. The daemon was still running though. This is not sane. I
didn't set up any LVM2 stuff yet, but boot.device-mapper still runs and
I get dm-mod in my kernel. This is the sort of thing which should be
there if you ran the GUI to set it up or enabled it at any point in
the installer, but not otherwise (I can think of no case that would
require dm-mod to be loaded where you had not already configured
LVM2 etc., or were not actively in the process of configuring it).
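The conditional-activation idea is scriptable for SMART, too: `smartctl -i` reports whether a drive actually supports SMART, so an init script could skip smartd on hardware like the Efika. A sketch that parses that output (the matched phrase is smartmontools' usual wording, assumed here rather than guaranteed across versions):

```shell
#!/bin/sh
# Sketch: decide whether smartd is worth starting, from `smartctl -i`
# output piped on stdin. The grepped phrase is smartmontools' typical
# wording - an assumption, not a stable interface.
smart_available() {
    grep -q 'SMART support is: Available'
}

# Real-system usage (as root):
#   smartctl -i /dev/sda | smart_available && smartctl -s on /dev/sda
```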

Unfortunately the rc.d dependency system only seems to work within
itself (so a service relies on other services to start), but an
application can't really query whether a service is started or
stopped (except if it publishes itself on D-Bus), or make a request
for it to be started once the rc.d stuff is over. I don't think SuSE
manages this at all (using traditional init), although replacements
like upstart, launchd and initng all install a small daemon which can
do service control.. I bet most of them suck.
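For the D-Bus case at least, the bus itself can answer: the org.freedesktop.DBus.ListNames method returns every name currently on the bus, so a script can detect a running D-Bus-registered daemon such as Avahi. A sketch; the dbus-send invocation and the Avahi bus name are real, but treat the plumbing around them as illustrative:

```shell
#!/bin/sh
# Sketch: detect a D-Bus service by scanning the bus-name list.
# Reads a ListNames-style reply on stdin so it can be fed from dbus-send.
has_bus_name() {
    grep -q "\"$1\""
}

# Real-system usage (requires a running system bus):
#   dbus-send --system --dest=org.freedesktop.DBus --type=method_call \
#     --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames \
#   | has_bus_name org.freedesktop.Avahi && echo "Avahi is on the bus"
```

This still only covers services that register on the bus; purely rc.d-managed daemons remain invisible to it, which is the gap complained about above.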

Now I am looking at it I am really, really disappointed in the state
of affairs of Linux which has been relying on the stupid runlevel
system (which has absolutely no fine-grained service control at all)
since the dark ages and has not come out of it. While Windows only has
3 service states (Automatic, Manual and Disabled), these can be
stopped and started at any time, programmatically, through service
names which are well known to the applications that use them. On any
Linux distribution any of these "services" (let's assume they're all
in rc.d) can be stopped and started but the names change per distro..
there is no registry or namespace like mDNS or D-Bus recommends in
order to get to this stuff.

Sigh. Oh well.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Putrycz, Erik
2009-01-11 07:02:52 UTC
Permalink
Post by Rob OpenSuSE
On old systems, bloat was causing a few megabytes of extra memory
access, now it can be 100-400MB. And it's no 11 cycles, but CPUs wait
100's on cache misses, never mind if there's a page fault and disk
access involved.
In my experience, it takes 2 generations of software to get things right.
These days, you are not going to ask a developer to program in assembler
just to get the most optimal memory allocation.
The complexity of the desktop environments is going to grow constantly,
and this is not a trend one can fight. The same goes for the underlying
technologies; I don't think the devs see much of the memory
consumption, as much of it is hidden in all the frameworks involved.
And often the first generation of a piece of software will focus on
functionality and stability, and it takes another generation to profile
and optimize.

Erik.
Rob OpenSuSE
2009-01-11 15:56:05 UTC
Permalink
Post by Putrycz, Erik
Post by Rob OpenSuSE
On old systems, bloat was causing a few megabytes of extra memory
access, now it can be 100-400MB. And it's no 11 cycles, but CPUs wait
100's on cache misses, never mind if there's a page fault and disk
access involved.
In my experience, it takes 2 generations of software to get things right.
Someone has expressed the opinion that memory consumption is escalating.
Post by Putrycz, Erik
From what I can see, the Qt4/KDE4 combo uses a little less memory than
the same OS running Qt3/KDE3.
Similarly Firefox 3, is using less memory than FF2.

So actually the developers seem to be doing a good job; they're not
going the "Bloaty Vista" route, but trying to keep things efficient,
because of experience in the previous generation of systems. This has
paid off in the fastest growing areas, which are the low power
consumption, lower performance machines.

The point about memory being relatively slower than it used to be
means cache misses are relatively greater penalties, and that the
optimisation work that went into FF3, for example, is very perceptible
even on systems with plenty of RAM. RAM size is not the only thing
that determines good performance; diminishing returns set in, and IMO
the general PC press haven't caught up with that yet. Good code is
important.
Hans Witvliet
2009-01-11 20:56:16 UTC
Permalink
Post by Rob OpenSuSE
Post by Putrycz, Erik
Post by Rob OpenSuSE
On old systems, bloat was causing a few megabytes of extra memory
access, now it can be 100-400MB. And it's no 11 cycles, but CPUs wait
100's on cache misses, never mind if there's a page fault and disk
access involved.
In my experience, it takes 2 generations of software to get things right.
Someone has expressed the opinion that memory consumption is escalating.
It wasn't me, but I've said it a number of times.
I know that the price of memory and CPU is constantly dropping, but that
should never be an excuse for a runaway OS footprint.

Some applications need a lot of memory; no problem with that.
And if you run a lot of them concurrently, it's your own fault.

One of the problems that was getting serious with the 10.1 release was
that a number of nice applications were installed automatically. Thankfully
this unrequested eye candy has been dropped.
But it always remains a tradeoff between ease of installation for
newbies, and the hassle of detecting and removing unwanted stuff.
Even more so when it is tangled with strange dependencies.

Advertising all of the (new) eye candy is nice; bluntly installing it
is something else, no?

hw
Larry Stotler
2009-01-09 14:11:09 UTC
Permalink
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the one people will look at when thinking about where to put
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats are
way off. 19% running 11.1 and 0 running 11.0/i586???? Seems like a
lot of people have no idea either. It looks like a Fedora specific
site anyway.
Vincent Untz
2009-01-09 14:22:21 UTC
Permalink
Post by Larry Stotler
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the one people will look at when thinking about where to put
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats are
way off. 19% running 11.1 and 0 running 11.0/i586???? Seems like a
lot of people have no idea either. It looks like a Fedora specific
site anyway.
See http://en.opensuse.org/Hardware/Smolt

It's not Fedora-specific, and openSUSE started using it with 11.1, so
the figures for 11.0 are expected.

Vincent
--
Les gens heureux ne sont pas pressés.
Andreas Jaeger
2009-01-09 14:22:41 UTC
Permalink
Post by Larry Stotler
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the one people will look at when thinking about where
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats are
way off. 19% running 11.1 and 0 running 11.0/i586???? Seems like a
lot of people have no idea either. It looks like a Fedora specific
site anyway.
You should submit your hardware information to it so that we can see what kind
of systems our users use.

It's started by Fedora but used by openSUSE as well - starting basically with
11.1, so no wonder that there's not much for 11.0.

Please read:
http://zonker.opensuse.org/2008/12/22/reminder-to-smolt-we-want-your-hardware-
profiles/
http://en.opensuse.org/Smolt

Andreas
--
Andreas Jaeger, Director Platform / openSUSE, ***@suse.de
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
Maxfeldstr. 5, 90409 Nürnberg, Germany
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Larry Stotler
2009-01-09 15:48:30 UTC
Permalink
Post by Andreas Jaeger
You should submit your hardware information to it so that we can see what kind
of systems our users use.
Good idea
Post by Andreas Jaeger
It's started by Fedora but used by openSUSE as well - starting basically with
11.1, so no wonder that there's not much for 11.0.
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly, because I haven't found a way to
add my info......
Bryen
2009-01-09 15:54:57 UTC
Permalink
Post by Larry Stotler
Post by Andreas Jaeger
You should submit your hardware information to it so that we can see what kind
of systems our users use.
Good idea
Post by Andreas Jaeger
It's started by Fedora but used by openSUSE as well - starting basically with
11.1, so no wonder that there's not much for 11.0.
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly because I haven't found a way to
add my info......
You shouldn't have to do anything. Upon installation of 11.1, and the
subsequent first update, smolt should automatically do the process for
you. That's what happened with both of my machines. Did you not see a
smolt notification when you did your first system update?
--
Bryen Yunashko
openSUSE Board Member
Larry Stotler
2009-01-09 15:57:34 UTC
Permalink
Post by Bryen
You shouldn't have to do anything. Upon installation of 11.1, and the
subsequent first update, smolt should automatically do the process for
you. That's what happened with both of my machines. Did you not see a
smolt notification when you did your first system update?
No. I've only got 1 machine with a 11.1 install, and I got TinyWM
instead of KDE3. So, I haven't bothered to fix it yet. I have a
dozen machines running linux, and since I don't intend to use 11.1 as
a production version anytime soon, I guess I won't be able to give
them my info.

That doesn't help to say the least.
Vincent Untz
2009-01-09 16:04:44 UTC
Permalink
Post by Larry Stotler
Post by Bryen
You shouldn't have to do anything. Upon installation of 11.1, and the
subsequent first update, smolt should automatically do the process for
you. That's what happened with both of my machines. Did you not see a
smolt notification when you did your first system update?
No. I've only got 1 machine with a 11.1 install, and I got TinyWM
instead of KDE3. So, I haven't bothered to fix it yet. I have a
dozen machines running linux, and since I don't intend to use 11.1 as
a production version anytime soon, I guess I won't be able to give
them my info.
You can simply run the smoltGui or smoltSendProfile command.

Vincent
--
Les gens heureux ne sont pas pressés.
Larry Stotler
2009-01-09 16:22:15 UTC
Permalink
Post by Vincent Untz
You can simply run the smoltGui or smoltSendProfile command.
Had to install it on my server here at work. It had 4 dependencies as
well, which I will have to remove now since I only needed them for
this one program, one time..... They need to compact it more.
Boyd Lynn Gerber
2009-01-09 17:28:28 UTC
Permalink
Post by Bryen
Post by Larry Stotler
Post by Andreas Jaeger
You should submit your hardware information to it so that we can see what kind
of systems our users use.
Good idea
Post by Andreas Jaeger
It's started by Fedora but used by openSUSE as well - starting basically with
11.1, so no wonder that there's not much for 11.0.
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly because I haven't found a way to
add my info......
You shouldn't have to do anything. Upon installation of 11.1, and the
subsequent first update, smolt should automatically do the process for
you. That's what happened with both of my machines. Did you not see a
smolt notification when you did your first system update?
I never have seen any messages, but all my installs have been network
without internet access. The updates come from the update directory I
have on the external 512 GB drive. I plug it in nightly and update it.
--
Boyd Gerber <***@zenez.com> 801 849-0213
ZENEZ 1042 East Fort Union #135, Midvale Utah 84047
Rajko M.
2009-01-10 02:00:41 UTC
Permalink
On Friday 09 January 2009 11:28:28 am Boyd Lynn Gerber wrote:
...
Post by Boyd Lynn Gerber
I never have seen any messages, but all my installs have been network
without internet access. The updates come from the update directory I
have on the external 512 GB drive. I plug it in nightly and update it.
The smoltGui that is started by the updater will not give you the password
that you need to change the uploaded profile and mark what works and to
what extent.

See:
http://en.opensuse.org/Smolt
for details.
--
Regards, Rajko
Kevin Dupuy
2009-01-11 23:50:47 UTC
Permalink
Post by Larry Stotler
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly because I haven't found a way to
add my info......
You should use the Smolt application on your computer to add your
info... if you use 11.1, you're asked to click a button on a
notification to do so upon first online update.
--
Kevin "Yeaux" Dupuy - openSUSE Member
Public Mail: <***@opensuse.org>
Merry Christmas & Happy Holidays from the Yeaux!
Rajko M.
2009-01-12 03:10:12 UTC
Permalink
Post by Kevin Dupuy
Post by Larry Stotler
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly because I haven't found a way to
add my info......
You should use the Smolt application on your computer to add your
info... if you use 11.1, you're asked to click a button on a
notification to do so upon first online update.
It will open the Smolt GUI, which will transfer information about the
hardware, but without information on what works and what doesn't. To edit
that you need a password, which the GUI doesn't provide yet.

See http://en.opensuse.org/Smolt
--
Regards, Rajko
Space Case
2009-01-12 03:48:41 UTC
Permalink
On Jan 11, 5:50pm, Kevin Dupuy wrote:
} Subject: Re: [opensuse-factory] Plan for 11.2?
Post by Kevin Dupuy
Post by Larry Stotler
Ah.. However, I can't figure out what I'm supposed to do on that
site. It's not very user friendly because I haven't found a way to
add my info......
You should use the Smolt application on your computer to add your
info... if you use 11.1, you're asked to click a button on a
notification to do so upon first online update.
I have two systems, an i586 Intel and x86_64 AMD, both updated ~weekly
from factory since about 10.1 (using smart). I have never seen a request
to use smolt until this thread. They're both now on 11.2a0.

So I try it, and on both machines get this:

(x86-64: bunch of file traces, then)
File "/usr/lib64/python2.6/site-packages/urlgrabber/sslfactory.py", line 63, in create_opener
return m2urllib2.build_opener(self.ssl_context, *handlers)
File "/usr/lib64/python2.6/site-packages/M2Crypto/m2urllib2.py", line 112, in build_opener
if inspect.isclass(check):
NameError: global name 'inspect' is not defined

or
(i586: bunch of file traces, then)
File "/usr/lib/python2.6/site-packages/urlgrabber/sslfactory.py", line 63, in create_opener
return m2urllib2.build_opener(self.ssl_context, *handlers)
File "/usr/lib/python2.6/site-packages/M2Crypto/m2urllib2.py", line 112, in build_opener
if inspect.isclass(check):
NameError: global name 'inspect' is not defined


Happens with both gui and shell. I've not had a chance to examine
it further. Anybody have a quick clue what might be wrong?
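For what it's worth, the traceback itself is a strong hint: m2urllib2.py calls inspect.isclass() and the NameError says the inspect module was never imported, i.e. a missing "import inspect" at the top of that M2Crypto file. A hedged sketch of a local workaround (an untested assumption; back the file up first, and the real fix belongs in the package):

```shell
#!/bin/sh
# Sketch: prepend "import inspect" to a Python file that uses inspect.*
# without importing it. Idempotent - skips files that already import it.
# The path in the usage comment comes from the i586 traceback above;
# adjust for lib64 on x86_64.
add_inspect_import() {
    grep -q '^import inspect' "$1" || sed -i '1i import inspect' "$1"
}

# e.g. (as root, after a backup):
#   add_inspect_import /usr/lib/python2.6/site-packages/M2Crypto/m2urllib2.py
```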

Thanks,
~Steve
Michael Loeffler
2009-01-09 14:34:21 UTC
Permalink
Post by Larry Stotler
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty
sure the numbers there are the one people will look at when
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats
are way off. 19% running 11.1 and 0 running 11.0/i586???? Seems
like a lot of people have no idea either. It looks like a Fedora
specific site anyway.
With 11.0 we had smolt, but it was very well hidden. With 11.1 everybody
is asked to send hw info to smolt. That's why there is such a huge
difference between 11.0 and 11.1.

Smolt is at its beginning, but it has the power to improve hardware
support for Linux in general, as it's the first place where hw info is
displayed publicly based on a very large user base.

M
--
Michael Löffler, Product Management
SUSE LINUX Products GmbH - Nürnberg - AG Nürnberg - HRB 16746 - GF:
Markus Rex
Rafa Grimán
2009-01-09 14:54:44 UTC
Permalink
Hi :)
Post by Michael Loeffler
Post by Larry Stotler
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty
sure the numbers there are the one people will look at when
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats
are way off. 19% running 11.1 and 0 running 11.0/i586???? Seems
like a lot of people have no idea either. It looks like a Fedora
specific site anyway.
With 11.0 we had smolt but very well hidden. With 11.1 everybody is
asked to send hw info to smolt. That's why there is such a huge
difference between 11.0 and 11.1.
Smolt is at its beginning but it has the power to improve hardware
support for Linux in general as its the first place where hw info is
displayed publicly based on a very large user base.
Does smolt distinguish between two different computers and two installations
(reinstallation, for example) of the same computer?

I ask this because I have different scenarios:

1.- say you install openSUSE 11.1 and then you reinstall because
you goofed up. Would that count as 1 installation and smolt
would not resend the info? Or does smolt count that as 2
different systems and resends the info as if it were a
different computer?

2.- Say you were a Fedora user and you added your computer to
the smolt statistics. A couple of months later you discover
openSUSE, get rid of Fedora never to go back and install
openSUSE 11.1. Would that also be counted as a new computer?

3.- What if you have a partition with openSUSE 11.0 from which
you've already run smolt and decide to upgrade to 11.1? Is
that a new computer added to the smolt statistics?

4.- What if you have 1 partition with openSUSE 11.0 with which you
sent all the info to smolt web and you install openSUSE 11.1
on another partition and all the info gets sent back to
smolt's web. Is that counted as a new/another different
computer?

Been looking for the answers to these questions on smolt's web but haven't
found them. Maybe I'm looking in the wrong place? Any ideas?

TIA

Rafa
--
"We cannot treat computers as Humans. Computers need love."

***@skype.com
Larry Stotler
2009-01-09 16:00:40 UTC
Permalink
Post by Rafa Grimán
Been looking for the answers to these questions on smolt's web but haven't
found them. Maybe I'm looking in the wrong place? Any ideas?
Their wiki is broken when I try to go there as well. Oh well.
Rajko M.
2009-01-10 02:15:28 UTC
Permalink
Post by Rafa Grimán
Hi :)
...
Post by Rafa Grimán
Does smolt distinguish between two different computers and two
installations (reinstallation, for example) of the same computer?
1.- say you install openSUSE 11.1 and then you reinstall because
you goofed up. Would that count as 1 installation and smolt
would not resend the info? Or does smolt count that as 2
different systems and resends the info as if it were a
different computer?
2.- Say you were a Fedora user and you added your computer to
the smolt statistics. A couple of months later you discover
openSUSE, get rid of Fedora never to go back and install
openSUSE 11.1. Would that also be counted as a new computer?
3.- What if you have a partition with openSUSE 11.0 from which
you've already run smolt and decide to upgrade to 11.1? Is
that a new computer added to the smolt statistics?
4.- What if you have 1 partition with openSUSE 11.0 with which you
sent all the info to smolt web and you install openSUSE 11.1
on another partition and all the info gets sent back to
smolt's web. Is that counted as a new/another different
computer?
Been looking for the answers to these questions on smolt's web but haven't
found them. Maybe I'm looking in the wrong place? Any ideas?
When you run smolt it will create a unique ID in /etc/smolt.
If it is deleted, the machine will be counted again.

To remedy this as much as possible, the statistics page counts only the
last 90 days.

The project is still in development, and ideas on how to work around
problems are welcome. See: http://en.opensuse.org/Smolt for details.
--
Regards, Rajko
Andrew Wafaa
2009-01-09 14:36:46 UTC
Permalink
Post by Larry Stotler
Post by Stanislav Visnovsky
I hope you've sent your hardware profile via smolt. I'm pretty sure the
numbers there are the one people will look at when thinking about where to put
http://smolts.org/static/stats/stats.html
Not sure what I'm supposed to do with that site. And, the stats are
way off. 19% running 11.1 and 0 running 11.0/i586???? Seems like a
lot of people have no idea either. It looks like a Fedora specific
site anyway.
You're supposed to use the site to see how many registered installs
there are, what hardware is being used, etc. As a user you can submit
your hardware profile and actually add comments/tips/tricks/etc. for the
hardware, and also mark items as working with no issue, working with a
bit of magic, or just not working [0].

There will be a huge difference between 11.0 and 11.1 as openSUSE had
only begun talks with Fedora about joining in and helping out after
11.0's release. Yes this is a project that was started by Fedora, but
we (the openSUSE project) can see the benefit in it and are happy and
willing to join in.

There have been multiple posts since last summer about Smolt [0], [1],
[2], and as with everything it takes time and effort to get the word
spread about something good; unfortunately bad things tend to become
known much, much faster :-/ A good way to follow what's going on is to
keep an eye on PlanetSUSE; most interesting items pop up there ;-)

[0] =
http://www.wafaa.eu/index.php?/archives/153-Smolt-your-Hardware.html
[1] = http://zonker.opensuse.org/2008/12/11/dont-forget-to-smolt/
[2] =
http://zonker.opensuse.org/2008/12/22/reminder-to-smolt-we-want-your-hardware-profiles/
--
Andrew Wafaa, openSUSE Member: FunkyPenguin.
openSUSE: Get It, Discover It, Create It at http://www.opensuse.org
***@opensuse.org | http://www.wafaa.eu
Rob OpenSuSE
2009-01-08 20:35:33 UTC
Permalink
Post by Matt Sealey
On Wed, Dec 17, 2008 at 8:25 AM, Birger Kollstrand
Post by Birger Kollstrand
10. Default media made for USB key as well as DVD.
It actually requires a hell of a lot of setup to get a USB key
bootable, more than you could possibly do by shipping a file. I guess
a little Windows installer would be okay that shoved a bootloader on
there, and copied the NET iso stuff there (or even the full DVD) but
that kind of defeats the object of perhaps grabbing a clean system and
installing SUSE from scratch from a USB key. You'd need an existing
system to do it.. which is the computing equivalent IMO of
waterboarding.
Aw come on, it's far more sensible to install on a "clean" system
when you've got another nearby, so that you can check up on things
and/or do something useful rather than watch those installer info
screens. I think the only time I was actually preoccupied by an
install was with COL2 in 1999, which actually invited you to play
ksirtet whilst it did the work, in a rather impressive pipelined
manner.

How many people without internet access via another computer are going
to be fiddling about with a USB key, rather than a DVD box set, or a
Live CD or DVD image?
Matt Sealey
2009-01-08 20:57:02 UTC
Permalink
Post by Rob OpenSuSE
Post by Matt Sealey
that kind of defeats the object of perhaps grabbing a clean system and
installing SUSE from scratch from a USB key. You'd need an existing
system to do it.. which is the computing equivalent IMO of
waterboarding.
Aw come on, it's far more sensible to install on a "clean" system,
when you've got another near by, that you can check up on things,
and/or do something useful rather than watch those installer info
screens. I think the only time I actually was preoccupied by an
install, was with COL2 in 1999, which actually invited you to play
ksirtet, whilst it did the work in a rather impressive pipelined
manner.
How many ppl without internet access via another computer, are going
to be fiddling about with USB key, rather than DVD boxset, or a Live
CD or DVD image?
Ideally most people would have a PC that does everything they need,
and now that we live in the world of the iPhone, who needs another PC
to browse the web?

Ideally PCs would boot from an HTTP URL, but I've only ever seen a couple
of systems that could do that (the OpenFirmware implementations on the
OLPC and the one in development at Genesi :)

Maybe EFI will fix it for the rest of the world so installing an OS is
as much as typing in "get.opensuse.org" or "go.windows.com" and having
it boot something.. security notwithstanding.

Mounting repositories over HTTP would be a great idea too. FUSE has
suitable filesystems.. that would save the ridiculous notion right now
where downloading over the net is a "download, unpack" affair compared
to NFS, local repo on a hard disk or .. I'd like to say DVD, but my
experience is it always "downloaded" them from CD media before
unpacking it. This sort of stuff can double install times.
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Rob OpenSuSE
2009-01-08 22:23:48 UTC
Permalink
Post by Matt Sealey
Mounting repositories over HTTP would be a great idea too. FUSE has
suitable filesystems.. that would save the ridiculous notion right now
where downloading over the net is a "download, unpack" affair compared
to NFS, local repo on a hard disk or .. I'd like to say DVD, but my
experience is it always "downloaded" them from CD media before
unpacking it. This sort of stuff can double install times.
That could be pipelined though, so that either the rpm install or the
downloads happen for free in parallel with the next time-consuming
part of the operation. If you're going to have to get the whole file
(and you are), then there's not really a big win in processing the
header and then blocking.
Stephan Binner
2009-01-09 08:14:09 UTC
Permalink
If you just want Marble you had to install the entirety of kdeedu.
Marble was never part of kdeedu3. </nitpick>
Now it's all separate in 11.1.
Not gnome-games <gd&r>...
but I noticed it in KDE4 too, which is odd since 11.1 was supposed to
have "no Qt3 or GTK apps on the default desktop". If that's true why
install it? The Qt3 compatibility library was installed too.
It was more "no Qt3 based apps running by default on KDE4 desktop" and we
admittedly failed to achieve that goal (knetworkmanager). Getting there
and having no Qt3 based apps in the default install at all will be one
of our goals for next release: http://en.opensuse.org/KDE/Ideas/11.2

Bye,
Steve
Daniele
2009-01-09 18:45:31 UTC
Permalink
Post by Stephan Binner
It was more "no Qt3 based apps running by default on KDE4 desktop"
and we admittedly failed to achieve that goal (knetworkmanager).
Getting there and having no Qt3 based apps in the default install at
http://en.opensuse.org/KDE/Ideas/11.2
I hope that Quanta and K3b will be ready for that time..
Well, Quanta is not in the default installation, but K3b...
Bye.
--
*** Linux user # 198661 ---_ ICQ 33500725 ***
*** Home http://www.kailed.net ***
*** Powered by openSUSE ***
Matt Sealey
2009-01-09 18:44:11 UTC
Permalink
Post by Stephan Binner
If you just want Marble you had to install the entirety of kdeedu.
Marble was never part of kdeedu3. </nitpick>
I'm sure it was still part of kdeedu4 on some distributions.
Post by Stephan Binner
Now it's all separate in 11.1.
Not gnome-games <gd&r>...
:D
Post by Stephan Binner
but I noticed it in KDE4 too, which is odd since 11.1 was supposed to
have "no Qt3 or GTK apps on the default desktop". If that's true why
install it? The Qt3 compatibility library was installed too.
It was more "no Qt3 based apps running by default on KDE4 desktop" and we
admittedly failed to achieve that goal (knetworkmanager).
Yeah I noticed that. Shame. For 11.2 though right?
Post by Stephan Binner
and having no Qt3 based apps in the default install at all will be one
of our goals for next release: http://en.opensuse.org/KDE/Ideas/11.2
So KNetworkManager and YaST2 Control Center need moving across. What
else? I notice it says in the ideas list that "no KDE3 libs in the
default distro, except of" (this is bad english btw :) kdelibs3,
kdebase3-runtime, kdevelop3, quanta. Okay so that's all to support
kdevelop3, because kdevelop4 really isn't ready.. but does that mean
those parts are going to be installed by default, or just in the repo
as dependencies for kdevelop?

I'll throw in some feature ideas at the weekend as I have a ton of
nitpicks about KDE4 stuff in SuSE.. however I realise right now, all
my systems are sitting on GNOME which sort of sucks for verifying
stuff. Are you using that page for lengthy diatrib^H^H^H^H^H umm..
discussion? Or just single lines of "this would be good"?

Is there a GNOME version of that page?
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Stephan Binner
2009-01-12 10:23:15 UTC
Permalink
Post by Matt Sealey
Post by Stephan Binner
Marble was never part of kdeedu3. </nitpick>
I'm sure it was still part of kdeedu4 on some distributions.
http://packages.opensuse-community.org/index.jsp?distro=openSUSE_103&searchTerm=marble
Post by Matt Sealey
Post by Stephan Binner
It was more "no Qt3 based apps running by default on KDE4 desktop" and we
admittedly failed to achieve that goal (knetworkmanager).
Yeah I noticed that. Shame. For 11.2 though right?
This follows from the goal of not having Qt3 on the Live-CD at all.
Post by Matt Sealey
Post by Stephan Binner
of our goals for next release: http://en.opensuse.org/KDE/Ideas/11.2
KNetworkManager and YaST2 Control Center need moving across. What else?
Read the page? :-)
Post by Matt Sealey
but does that mean those parts are going to be installed by default, or just
in the repo as dependencies for kdevelop?
Of course the latter.
Post by Matt Sealey
stuff. Are you using that page for lengthy diatrib^H^H^H^H^H umm..
discussion? Or just single lines of "this would be good"?
We will discuss them sometime in IRC meeting or on the mailing list.
Post by Matt Sealey
Is there a GNOME version of that page?
http://en.opensuse.org/GNOME/Ideas and links

Bye,
Steve
JP Rosevear
2009-01-13 15:21:03 UTC
Permalink
Post by Matt Sealey
Is there a GNOME version of that page?
http://en.opensuse.org/GNOME/Ideas

11.2 editing hasn't started yet; that should kick off with the IRC
meeting this week, but feel free to start now.

-JP
--
JP Rosevear <***@novell.com>
Novell, Inc.
Matt Sealey
2009-01-10 06:00:31 UTC
Permalink
On Wed, Dec 17, 2008 at 8:25 AM, Birger Kollstrand
Post by Birger Kollstrand
Hi,
I'm wondering if there is a plan for the 11.2 distribution? It's not
when it's available but more what Novell want's to achieve with it.
I had a thought. Debian, Ubuntu and Mandriva have all adopted "dash"
(the Debian Almquist Shell) as their default "sh" for booting, and even
have a static version which is absolutely tiny (compare bash, which has
many dependencies and whose basic binary is 800k; dash statically linked
is the same size, dynamically linked 120k or less). This does a lot
to speed up booting when used for things like udev and rc.d scripts -
on the proviso that none of these scripts require any bash-specific
scripting. The same might go for complex RPM building :]

Debian is helped here because its policy is to have strictly POSIX-
compliant init scripts etc., but I don't know what the SUSE policy
is... does anything in SUSE still absolutely require bash?
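The portability concern above boils down to keeping bashisms out of anything run by /bin/sh. A minimal sketch of common bashisms and their POSIX equivalents, runnable under dash or any POSIX sh (the variable names are purely illustrative):

```shell
#!/bin/sh
# POSIX replacements for common bashisms, so the script runs under dash.

name="foobar"

# bashism: if [[ $name == foo* ]]; then ...
# POSIX: pattern matching via case
case "$name" in
    foo*) match=yes ;;
    *)    match=no ;;
esac
echo "match=$match"

# bashism: ${name/foo/baz} (substring replacement)
# POSIX: prefix/suffix stripping with ${var#pattern} / ${var%pattern}
stem=${name#foo}    # strips leading "foo", leaving "bar"
echo "stem=$stem"

# bashism: source ./helper.sh
# POSIX: the dot command, ". ./helper.sh"
```

Tools like Debian's checkbashisms exist to catch these mechanically, which is what makes a wholesale /bin/sh switch feasible.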

On my Efika and Pegasos I've seen boot speed improvements (sorry, I lost
the boot charts, and it was ~18 months ago) of something like 3-4
seconds in the *worst* case. I might have written off shaving
microseconds, but the fact that it is something like 1/10th of the boot
time on my system makes a difference (it still won't boot in 30
seconds, but.. that's what our internal kernel-genesi project is for
:)

What about including the sreadahead patch and tools to efficiently
read ahead the blocks which are loaded at boot? My system actually boots
FASTER without the traditional "preload" running; plus, preloading the
traditional way (entire files) soaks up a hell of a lot of
buffer/cache space, when really all you need is the blocks the boot
process touched.
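The block-level idea can be approximated in shell: instead of preloading whole files, warm the page cache with only the byte ranges the boot actually touched. A sketch, assuming a range list in the style that sreadahead-like tools record (the input format and the sample line are made up for illustration):

```shell
#!/bin/sh
# Warm the page cache with only the block ranges used at boot, rather
# than whole files. Reads "<file> <offset-in-4k-blocks> <block-count>"
# triples from stdin and discards the data, leaving it cached.
warm_ranges() {
    while read -r file offset count; do
        [ -r "$file" ] || continue
        dd if="$file" of=/dev/null bs=4096 skip="$offset" count="$count" 2>/dev/null
    done
}

# Hypothetical one-line range list:
printf '%s\n' "/etc/hostname 0 1" | warm_ranges
echo "done"
```

The real sreadahead additionally sorts the ranges by on-disk location so a rotating disk reads them in one sweep; that ordering is the part a naive file-level preload cannot do.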

What about taking a further hint from the Intel "5 second boot" guys
and further optimizing the boot process? It needn't be as frugal as
the one in the EeePC demo; it may still load a splash screen (which
takes ages but.. is pretty. Fedora's splash even got more complicated
and animated..) and do some things here and there. But the idea of a
static /dev which is used to bring the system up, get D-Bus and GDM/KDM
running, and THEN start hal, udev and the like is a great one. Since X
supports hotplug input devices, the mouse and keyboard will magically
start working at the point the system is ready to log in (at which
point it can start setting up networking, OR perhaps leave that for
AFTER login, like Windows does).

Not looking for a 5- or even 10-second boot, but a 20-second one to a
usable, almost-ready login prompt would be awesome.

The other thing would be, and this is a silly feature: why not have a
kernel-vmware or kernel-virtualbox which includes only the basic
drivers these virtual machines support, and throws away the io
schedulers (since on a virtual machine with a virtual disk, io
reordering is completely pointless), other framebuffer drivers,
etc.? It's kind of silly to throw in every libata driver, block
driver, and hundreds of PCI and PCMCIA card drivers when the
VM is completely fixed-function within a very limited number of
configurations; even VirtualBox and VMware share a bunch of emulated
hardware now. About all you need is a set of USB modules, and
everything else can be made static. When you're running on a memory-
limited machine (for instance, I have a couple of boxes with 1GB in
which I run 256MB VMs), saving a couple of megabytes on the kernel and
initrd (and boot time, because nothing needs to come out of the initrd:
a tip from the 5 second boot guys) might make that tiny bit of
difference, with a view to the fancy new virtual appliance markets,
getting systems back up and running faster on failure, etc.
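A stripped-down guest kernel like that might start from a Kconfig fragment along these lines. This is a sketch only: the symbol names are from mainline Kconfig, and the exact set varies by kernel version and which chipset the VM emulates.

```
# Hypothetical fragment of a VM-guest kernel config.
CONFIG_ATA_PIIX=y          # Intel PIIX3/4 IDE, emulated by VMware/VirtualBox
CONFIG_SATA_AHCI=y         # AHCI, offered by newer virtual chipsets
CONFIG_FUSION_SPI=y        # LSI Fusion MPT SCSI, VMware's default controller
CONFIG_PCNET32=y           # AMD PCnet32 NIC (VMware default)
CONFIG_E1000=y             # Intel e1000 NIC (VirtualBox/VMware option)
CONFIG_SND_ENS1371=y       # Ensoniq ES1371 audio
CONFIG_SND_INTEL8X0=y      # Intel AC'97 audio

# Everything the VM never exposes can simply go:
# CONFIG_CPU_FREQ is not set      (no frequency scaling on virtual CPUs)
# CONFIG_INFINIBAND is not set    (no such virtual hardware)
# CONFIG_IOSCHED_CFQ is not set   (io reordering pointless on a virtual disk)
```

Building everything in (=y rather than =m) also serves the initrd point above: with no modules needed to mount the root disk, the initrd can shrink toward nothing.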
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Putrycz, Erik
2009-01-11 06:45:27 UTC
Permalink
Post by Matt Sealey
The other thing would be, and this is a silly feature, why not have a
kernel-vmware or kernel-virtualbox which only included the basic
drivers which these virtual machines support, throws away the io
schedulers (since on a virtual machine with a virtual disk, io
reordering is completely pointless) and other framebuffer drivers
etc.
This is a great idea, makes lots of sense.

Erik.
Marcus Meissner
2009-01-11 08:44:16 UTC
Permalink
Post by Putrycz, Erik
Post by Matt Sealey
The other thing would be, and this is a silly feature, why not have a
kernel-vmware or kernel-virtualbox which only included the basic
drivers which these virtual machines support, throws away the io
schedulers (since on a virtual machine with a virtual disk, io
reordering is completely pointless) and other framebuffer drivers
etc.
This is a great idea, makes lots of sense.
The kernel-...-base packages are targeted for this use.

Ciao, Marcus
Rob OpenSuSE
2009-01-11 16:07:25 UTC
Permalink
Post by Marcus Meissner
Post by Putrycz, Erik
Post by Matt Sealey
The other thing would be, and this is a silly feature, why not have a
kernel-vmware or kernel-virtualbox which only included the basic
drivers which these virtual machines support, throws away the io
schedulers (since on a virtual machine with a virtual disk, io
reordering is completely pointless) and other framebuffer drivers
etc.
This is a great idea, makes lots of sense.
The kernel-...-base packages are targeted for this use.
Do you mean there's a kernel in the kernel-base rpm that is intended
for virtualised environments? It's not clear to me what "targeted
for this use" means in this context.

When I last looked at this (over a year ago), I actually found it
simpler to use a Debian install; VirtualBox, for example, liked HZ
to be 100 rather than 250 or 1,000. There may have been a few other
tweaks. Since then it's become easier to have a minimal openSUSE
install, and also to make a 'spin' of openSUSE tailored to a virtual
environment.

Perhaps this would be a good area for a 'contrib' project, like the
11.1 KDE 3 Live CD and USB.
Greg KH
2009-01-11 16:38:26 UTC
Permalink
Post by Rob OpenSuSE
Post by Marcus Meissner
Post by Putrycz, Erik
Post by Matt Sealey
The other thing would be, and this is a silly feature, why not have a
kernel-vmware or kernel-virtualbox which only included the basic
drivers which these virtual machines support, throws away the io
schedulers (since on a virtual machine with a virtual disk, io
reordering is completely pointless) and other framebuffer drivers
etc.
This is a great idea, makes lots of sense.
The kernel-...-base packages are targeted for this use.
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.

But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)

But for KVM or Xen, this package should work. If not, please let us
know.

thanks,

greg k-h
Dominique Leuenberger
2009-01-12 08:24:31 UTC
Permalink
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
Greg, I hope you're kidding ;)

We have open-vm-tools KMPs in the distribution, which bring all the drivers you might possibly need for a VMware session. All nicely under the GPL.

Did I misunderstand your statement?

Dominique
M. Edward (Ed) Borasky
2009-01-12 15:24:42 UTC
Permalink
Post by Dominique Leuenberger
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
Greg, I hope you're kidding ;)
We have open-vm-tools KMPs in the distribution, which brings all the drivers for a VMware session you possibly might need. All nicely under GPL.
Did I misunderstand your statement?
Dominique
I had some minor issues with open-vm-tools on 11.1 with VMware
Workstation 6.5.1 and ended up having to use the proprietary ones that
come with 6.5.1. I haven't had a chance to do the troubleshooting
necessary to file a bug yet. But neither the open-vm-tools nor the
VMware ones would do Unity mode when the guest was running 11.1 with a
GNOME desktop; both of them would do Unity mode when the guest was
running 11.1 with an Xfce desktop; and only the VMware-supplied version
would do shared files with the host.
--
M. Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P), WOM

I've never met a happy clam. In fact, most of them were pretty steamed.
Dominique Leuenberger
2009-01-12 16:41:39 UTC
Permalink
Post by M. Edward (Ed) Borasky
I had some minor issues with open-vm-tools on 11.1 with VMware
Workstation 6.5.1 and ended up having to use the proprietary ones that
come with 6.5.1. I haven't had a chance to do the troubleshooting
necessary to file a bug yet. But neither the open-vm-tools nor the
VMware ones would do Unity mode when the guest was running 11.1 with a
Gnome desktop, both of them would do Unity mode when the guest was
running 11.1 with an XFCE desktop, but only the VMware-supplied version
would do shared files with the host.
I would be quite interested in this 'bug' report in this case. As I'm quite involved in the packaging of the open-vm-tools, I have a personal
interest in offering them 'bug-free'.

So in short: Unity is the same with the open-vm-tools and the proprietary ones? Then there is probably no easy fix to them (maybe the latest
open-vm-tools could help you... you find them in OBS Virtualization:VMware)

And file sharing did not work? How did you try it? Accessing \\.host? Mounting \\.host using vmhgfs? Drag'n'drop of files? In which direction,
from host to guest or vice versa?

(I would also ask all those questions when you put it in Bugzilla... so they are just in advance, in case you can open a bug ticket for it :) )

Dominique
Greg KH
2009-01-12 16:24:33 UTC
Permalink
Post by Dominique Leuenberger
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
Greg, I hope you're kidding ;)
About the virtualbox drivers? Not at all.
Post by Dominique Leuenberger
We have open-vm-tools KMPs in the distribution, which brings all the
drivers for a VMware session you possibly might need. All nicely under
GPL.
Ah, sorry, I was confused, it's the vmware "host" side drivers that are
still closed source. Those we can not redistribute.

thanks,

greg k-h
Dominique Leuenberger
2009-01-12 16:36:32 UTC
Permalink
Post by Greg KH
Post by Dominique Leuenberger
Post by Greg KH
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
Greg, I hope you're kidding ;)
About the virtualbox drivers? not at all.
So you had quite some fun reading them...
Post by Greg KH
Post by Dominique Leuenberger
We have open-vm-tools KMPs in the distribution, which brings all the
drivers for a VMware session you possibly might need. All nicely under
GPL.
Ah, sorry, I was confused, it's the vmware "host" side drivers that are
still closed source. Those we can not redistribute.
Actually, even there I'm not so sure about the entire thing: my understanding from several discussions (I'm a bit active there, as I package the
open-vm-tools, so things like this just come up) is that the host part is almost identical, with very few exceptions. The module set is more or
less the same; for Vsock, different DEFINEs are used.

I'm not sure, though, whether VMware themselves are interested in having the modules shipped by default, as their concern is giving the 'power of
updating' out of their hands. Now, with every release of VMware, they can ship a new module set. Having the modules upstreamed in the kernel would
stop them from doing so (or they could add them to the 'update' folder, but then newer kernels would be ignored... I think there is no feature to
just load the 'newest' driver).

So I'll start a discussion over at VMware; maybe for 11.2 we can actually get the host support in as well (if the license permits, of course!)

Dominique
Greg KH
2009-01-12 16:57:19 UTC
Permalink
Post by Dominique Leuenberger
Post by Greg KH
Post by Dominique Leuenberger
We have open-vm-tools KMPs in the distribution, which brings all the
drivers for a VMware session you possibly might need. All nicely under
GPL.
Ah, sorry, I was confused, it's the vmware "host" side drivers that are
still closed source. Those we can not redistribute.
Actually even there I'm not so sure about the entire thing: to my
understanding from several discussions (I'm active there a bit, as I'm
packaging open-vm-tools, so things like this just raise) seems to be
that the host part is almost identical, with very few exceptions. The
module set is more or less the same; for Vsock there are other DEFINES
used.
Look at the license of the modules themselves that vmware offers. A
number of them recently switched to be:
MODULE_LICENSE("GPL");
but at the same time, they introduced 3 new ones that are closed. So
the end result is the same, we can't ship them :(
Post by Dominique Leuenberger
I'm not sure though if vmware themself is interested in having the
modules shipped by default, as their concern is giving 'power of
updating' out of the hand.
Yeah, that's a horrible excuse. I've been talking with them for a long
time about this, and they are finally starting to realize that they need
to change this. But they move very slowly, so don't count on any big
changes any time soon.
Post by Dominique Leuenberger
Now, with every release of vmware, they can ship a new module set.
Having them upstreamed in the kernel would stop them from doing so (or
adding them in the 'update' folder, but then newer kernel would be
ignored... I think there is no feature to just load the 'newest'
driver.
No, it wouldn't stop them from this at all. It's no different from any
other company that ships updated drivers for their products for older
kernel releases. vmware is not unique at all, despite what they keep
claiming (and wishing...)
Post by Dominique Leuenberger
So I'll start a discussion over at vmware, maybe for 11.2 we can
actually get the support for the host also in (if license permits of
course!)
If someone sends me the vmware code, under the GPL, I can get it into
the next kernel version with about a day's work. It's not hard at all;
I don't know why people claim that...

thanks,

greg k-h
Matt Sealey
2009-01-12 17:08:55 UTC
Permalink
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
But for KVM or Xen, this package should work. If not, please let us
know.
Hi Greg,

Ignoring for a second the kernel modules required to support host
access, what I was thinking of was a kernel that implements, for
example, Intel PIIX3/4 ATA, AHCI, Fusion MPT (for VMware), Intel AC97
audio and ES1371 audio: you know, the hardware that you can pick in
boxes from VMware or VirtualBox or whatever other emulation
environment, the hardware it mimics on its fake PCI bus, etc. :)

vmware-tools and virtualbox kernel drivers, either in weird package
forms built by script or as KMPs, are something you generally have to
pull in once the OS is installed (I've not yet seen any distribution,
for licensing reasons or otherwise, provide any virtualization
host-guest toolkit by default on the installation DVD). This is fine.
But for the basic install, the basic kernel and the built initrd, why
include 40 PCI ethernet cards, 10 framebuffer drivers, ATA drivers for
every southbridge on the planet, and things like PCI DVB adapters and
server hardware such as InfiniBand, which the VM simply does not
provide on its virtual PCI bus? VirtualBox is based on QEMU and shares
a lot of the physical device emulation, and both VirtualBox and VMware
assume a certain subset of drivers is all an emulated system needs to
boot, get through an installer, and bring up Windows before host-guest
tools are installed. Those design decisions in the emulators mean a
more efficient, leaner kernel could be built: fewer drivers, less
kernel build time, less boot time, and far fewer bothersome modules to
be "autoloaded" (it's not like you can hotplug a virtual PCI card in
any of them, none of the emulators expose anything cpufreq can use,
etc.)
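The fixed device set is easy to enumerate because the emulated PCI IDs never change. A hypothetical sketch of mapping the handful of vendor:device IDs such a guest exposes (as `lspci -nn` prints them) to the only driver modules its kernel actually needs; the table is illustrative, not exhaustive:

```shell
#!/bin/sh
# Map the fixed PCI IDs emulated by VMware/VirtualBox guests to the
# driver modules a trimmed guest kernel would need.
needed_driver() {
    case "$1" in
        8086:7111) echo ata_piix ;;     # Intel PIIX4 IDE
        1000:0030) echo mptspi ;;       # LSI Fusion MPT SCSI (VMware)
        1022:2000) echo pcnet32 ;;      # AMD PCnet32 NIC (VMware default)
        8086:100e) echo e1000 ;;        # Intel 82540EM NIC (VirtualBox)
        1274:1371) echo snd_ens1371 ;;  # Ensoniq ES1371 audio
        *)         echo unknown ;;
    esac
}

needed_driver 8086:7111
needed_driver 1274:1371
```

On a real system the same question is answered dynamically (each device's modalias under /sys/bus/pci is resolved by modprobe), which is exactly the machinery a fixed-function VM kernel could skip.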
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations
Greg KH
2009-01-12 17:44:51 UTC
Permalink
Post by Matt Sealey
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
But for KVM or Xen, this package should work. If not, please let us
know.
Hi Greg,
Ignoring for a second the kernel modules required to support host
access, what I was thinking was a kernel that implemented, for
example.. Intel PIIX3/4 ATA, AHCI, Fusion MPT (for VMware), Intel AC97
audio and ES1371 audio, you know.. the hardware that you can pick in
boxes from VMware or VirtualBox or whatever other emulation
environment. The hardware it mimics on it's fake PCI bus etc. :)
Doesn't our kernel package today offer this?

Yes, it would be nice to somehow only roll a kernel package that
contained a limited number of drivers in it, exactly what you specify.
Different people have been asking for this for a while, and with the
increased usage of openSUSE/SLED on netbooks, we are starting to work on
something like this.

So be patient, it's on the roadmap, but first we need to get SLED11 out
the door.

thanks,

greg k-h
Matt Sealey
2009-01-12 19:07:57 UTC
Permalink
Post by Greg KH
Post by Matt Sealey
Post by Greg KH
Post by Rob OpenSuSE
Do you mean there's a kernel in the kernel-base rpm, that is intented
for virtualised environments?
Yes, it can be used that way.
But note, it doesn't have the vmware drivers as they violate the GPL and
can not be redistributed, and virtualbox drivers, well, let's just say
some of us looked at that code, and ran away screaming :)
But for KVM or Xen, this package should work. If not, please let us
know.
Hi Greg,
Ignoring for a second the kernel modules required to support host
access, what I was thinking was a kernel that implemented, for
example.. Intel PIIX3/4 ATA, AHCI, Fusion MPT (for VMware), Intel AC97
audio and ES1371 audio, you know.. the hardware that you can pick in
boxes from VMware or VirtualBox or whatever other emulation
environment. The hardware it mimics on it's fake PCI bus etc. :)
Doesn't our kernel package today offer this?
Everything but Fusion MPT I think, sure.
Post by Greg KH
Yes, it would be nice to somehow only roll a kernel package that
contained a limited number of drivers in it, exactly what you specify.
Different people have been asking for this for a while, and with the
increase usage of openSUSE/SLED on netbooks, we are starting to work on
something like this.
So be patient, it's on the roadmap, but first we need to get SLED11 out
the door.
That's good to know and all I wanted to know :D
--
Matt Sealey <***@genesi-usa.com>
Genesi, Manager, Developer Relations