Self-Introduction: Denis Ovsienko and /etc/net project
by Denis Ovsienko
Denis Ovsienko
Moscow, Russia
Linux system administrator and developer
ValueCommerce/Russia
I develop the /etc/net project (http://etcnet.org), and my goal is to integrate it
into Fedora Core.
I am a member of the ALTLinux Team. /etc/net is already integrated into the ALTLinux
development tree and should soon appear in version 3.0.
I know that ArchLinux has /etc/net in its repository. IDMS Linux did too,
but I haven't heard from them for the last few months.
My skills include six years of Linux experience, several programming
languages, five years of mixed software development and system/network
administration, and so on, but I guess that is not very relevant to my goal now.
I have reviewed the current initscripts bug list.
Some of these bugs do not exist in /etc/net:
#65114 RFE: ifup-aliases iproute support, ifup/ifup-aliases scop...
#75390 it would be nice to tie bandwidth shaping into the networ...
#129820 initscripts maclist patch
#132252 Request for addition of routing rule config file
#132912 No additional IP addresses at ethX without aliased devices
#132925 initscripts use old ifconfig instead of iproute2
#154348 Adds support for WPA (Wi-Fi Protected Access) to the ifup...
#168990 No ifup-gre/ifdown-gre scripts.
#170884 MTU of ethernet card can't be set before interface is up
#171763 Enhancement to initscripts
Some bugs gave me ideas for improving /etc/net:
#59114 .d-style scripts for ifup/ifdown
#119952 RFE: Add hook for "local" network initialization
#124045 Support setting a metric on interface routes
The whole process, if we don't run into unexpected problems, should take
three to six months. What I need:
1. The ability to advocate patches (sometimes heavy) to about 10-20 FC packages.
2. Probably some help with documentation.
How can we start?
pub 1024D/6D1844F2 2002-11-11
Key fingerprint = AF2F DDAE 7EB3 4699 09FF F0FC 00B1 6D41 6D18 44F2
uid Denis Ovsienko <linux(a)pilot.org.ua>
uid Denis Ovsienko (http://pilot.org.ua) <pilot(a)altlinux.ru>
sub 2048g/57B7ACBE 2002-11-11
--
DO4-UANIC
13 years, 4 months
autoconf breakage on x86_64.
by Sam Varshavchik
I don't know the right way to fix this, but something is definitely broken,
and something needs to be fixed one way or the other. The question is what
exactly needs to be fixed.
Consider something like this:
LIBS="-lresolv $LIBS"
AC_TRY_LINK_FUNC(res_query, AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
Here's what happens on x86_64:
gcc -o conftest -g -O2 -Wall -I.. -I./.. conftest.c -lresolv >&5
/tmp/ccW7EeDX.o(.text+0x7): In function `main':
/home/mrsam/src/courier/authlib/configure:5160: undefined reference to
`res_query'
collect2: ld returned 1 exit status
configure:5147: $? = 1
configure: failed program was:
[ blah blah blah ]
| /* We use char because int might match the return type of a gcc2
| builtin and then its argument prototype would still apply. */
| char res_query ();
| int
| main ()
| {
| res_query ();
| ;
| return 0;
| }
The exact same test on FC1 x86 works.
The reason appears to be that on x86_64 you have to #include <resolv.h>
in order to successfully pull res_query() out of libresolv.so. You don't
need to do this on x86, and the test program generated by AC_TRY_LINK_FUNC
does not include any headers; it uses a manual prototype instead.
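One possible direction (a sketch, untested; the header list follows the usual resolver man pages) would be to drop AC_TRY_LINK_FUNC for this symbol and write the check with AC_LINK_IFELSE, so the test program includes the real headers instead of a manual prototype:

```m4
LIBS="-lresolv $LIBS"
AC_LINK_IFELSE([AC_LANG_PROGRAM([[
/* Pull in the real declaration; on glibc this header can remap
   res_query to the symbol actually exported (e.g. __res_query). */
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
]], [[
res_query(0, 0, 0, 0, 0);
]])],
[AC_MSG_RESULT(yes)],
[AC_MSG_RESULT(no)])
```

That keeps the link test honest on both architectures, since it links against whatever symbol the header actually declares.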
So, what now?
14 years, 5 months
Zeroconf in FC5?
by Daryll Strauss
I saw zeroconf in action at a Mac-based facility a while back, and I have
to say I was impressed. It makes their networking setup very easy. The
biggest downside was that they knew very little about how their network
actually worked, which made my life integrating a Linux system into
their environment much more difficult. So, I'd like to see zeroconf
really integrated into FC5. I think it will make network setup much
easier for a lot of users.
For those who don't know, zeroconf provides several functions:
*) Allocating an IP address to a system dynamically when it boots
(without requiring a DHCP server)
*) Translating between names and IP addresses (without any setup or
directory server)
*) Publishing and discovering services such as DNS,
NFS, FTP, HTTP, printers, whatever (without requiring any setup or
directory server)
*) Allocating multicast addresses (without a MADCAP server). (This part
isn't supported yet, and I'm not sure I know what it means :))
Fedora ships with Howl, which looks to be the framework for doing
zeroconf. It seems that what's needed is to integrate Howl into all the
appropriate places.
- |Daryll
14 years, 7 months
Reporting bugs upstream
by Bart Vanbrabant
Hello,
When I find a bug in Fedora that is obviously a bug upstream, I report
it there. But should we also report it in the Fedora bugzilla and
reference the upstream bug report? That way, if another user files a
report for the same bug without checking upstream, the maintainer of
the module knows it's already filed upstream. Or should we notify the
maintainer in some other way?
I've searched the Fedora wiki but can't find any information
about this.
thanks,
Bart
--
Bart Vanbrabant <bart.vanbrabant(a)zoeloelip.be>
PGP fingerprint: 093C BB84 17F6 3AA6 6D5E FC4F 84E1 FED1 E426 64D1
14 years, 8 months
New /etc/services for testing available
by Phil Knirsch
Hi folks.
I'd like to get some feedback on a hugely updated /etc/services file I
put together today.
It basically merges the old /etc/services with almost all current
official IANA services.
I've tried to make sure the file is sane and in order, but due to the
huge number of new services I'd like to give it some selected exposure
and gather feedback before I think about putting it into our setup package.
The file can be downloaded from here:
http://people.redhat.com/pknirsch/services
Just drop it into /etc as a replacement (maybe making a copy of the old
/etc/services first, though that shouldn't be necessary, as the first part
of the file is identical to our old one).
I'd especially like to get feedback from people using it on network
servers or monitoring machines where apps like nmap, tcpdump and
Ethereal are run. My main interest is whether any errors pop up and whether
anyone hits performance problems (in the worst case, getservbyname() can
now take up to 15 times as long to complete).
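As a quick sanity check from the scripting side (a sketch; it assumes the C library's getservbyname() is backed by /etc/services, which is the usual setup unless nsswitch is configured otherwise), Python's socket module calls straight into the same resolver:

```python
import socket
import timeit

# getservbyname() scans /etc/services; "http" sits near the top, so
# hits stay fast, while an absent name forces a scan of the whole
# (now much larger) file before the lookup fails.
print(socket.getservbyname("http", "tcp"))  # 80

# Time repeated lookups before and after swapping in the new file
# to see whether the larger file slows anything down noticeably.
t = timeit.timeit(lambda: socket.getservbyname("http", "tcp"), number=1000)
print("1000 lookups: %.4fs" % t)
```

Timing a name that is not in the file at all is the interesting worst case, since the lookup has to scan to the end before failing.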
Thanks in advance,
Read ya, Phil
--
Philipp Knirsch | Tel.: +49-711-96437-470
Development | Fax.: +49-711-96437-111
Red Hat GmbH | Email: Phil Knirsch <phil(a)redhat.de>
Hauptstaetterstr. 58 | Web: http://www.redhat.de/
D-70178 Stuttgart
Motd: You're only jealous cos the little penguins are talking to me.
14 years, 11 months
Re: Re: gphoto2 problem: usb access for non root users
by Gianluca Cecchi
On Tue, 31 Jan 2006 22:04:59 +0100 Gianluca Cecchi wrote:
>> Asap I'll reboot ala windows and see if anything changes... ;-(
Actually I did:
- init 3
- yum update of other latest packages
- init 5
- re-login to GNOME as gcecchi
- plug in the digital camera
- all OK now!
Gianluca
14 years, 12 months
Yum and SRPMs
by n0dalus
Hi,
I was wondering if it is, or ever will be, possible to install SRPMs using
yum. The yum manpage says you can specify a package using
'package.arch', so I was wondering if that could be done with
'package.src'. I think this would be a really helpful feature. At the
moment, to get an SRPM I have to go to one of the FTP mirrors, find the
right folder, and then search for the right package among hundreds of
other files.
n0dalus.
15 years
Fedora in need of testers?
by Arthur Pemberton
I got the feeling from looking around the mailing lists that Fedora is in
need of testers. I am willing to use my FC4 on my Compaq Presario 2210US to
do some testing. I have a desktop with FC4 installed, but it is my primary
OS/machine, so I'd rather not play around on it.
What would be required of a potential tester?
--
As a boy I jumped through Windows, as a man I play with Penguins.
15 years
iptables problems after kernel 1871
by Gianluca Cecchi
After upgrading to the 1871 kernel and iptables 1.3.4-3, I have problems with iptables.
Actually, I ran a local update while both 1869 and 1871 were available. I
didn't notice that yum proposed to deinstall 1869 and reinstall 1871.
I got these messages while it overwrote my running kernel:
[root@fedora fedora]# less /tmp/yum.log
Installing: kernel ####################### [18/70]
/var/tmp/rpm-tmp.64393: line 1: 3235 Segmentation fault
/usr/sbin/module_upgrade 2.6.15-1.1871_FC5
[snip]
Installing: kernel-smp ####################### [25/70]
/var/tmp/rpm-tmp.86392: line 1: 3829 Segmentation fault
/usr/sbin/module_upgrade 2.6.15-1.1871_FC5smp
[snip]
Cleanup : pirut ####################### [64/70]
Could not parse file '/usr/share/applications/redhat-ekiga.desktop':
Failed to open file '/usr/share/applications/redhat-ekiga.desktop': No
such file or directory
Removing : openh323 ####################### [65/70]
Now iptables gives these errors:
[root@fedora fedora]# service iptables restart
Flushing firewall rules: iptables: loop hook 0 pos 0 00000021.
iptables: Too many levels of symbolic links
iptables: loop hook 0 pos 0 00000021.
iptables: Too many levels of symbolic links
[FAILED]
Setting chains to policy ACCEPT: nat iptables: Invalid argument
[FAILED]
Unloading iptables modules: Removing netfilter NETLINK layer.
[ OK ]
Applying iptables firewall rules: ip_tables: (C) 2000-2006 Netfilter Core Team
Netfilter messages via NETLINK v0.30.
ip_conntrack version 2.4 (8191 buckets, 65528 max) - 232 bytes per conntrack
iptables-restore v1.3.4: Can't set policy `POSTROUTING' on `ACCEPT' line 4: Bad
built-in chain name
[FAILED]
Any hints? No changes were made to the /etc/sysconfig/iptables file.
[root@fedora fedora]# rpm -q --changelog iptables|head -20
* Tue Jan 24 2006 Thomas Woerner <twoerner(a)redhat.com> 1.3.4-3
- added important iptables header files to devel package
* Fri Dec 09 2005 Jesse Keating <jkeating(a)redhat.com>
- rebuilt
15 years
Stateless linux and an idea of mine for SMALL networks without servers
by Nic
Disclaimer: I hope this is the right mailing list, but I really
wanted to reach developers who can say whether this is feasible and what
to use for it.
Anyway, I was fighting the usual problems with networks and came up with the
following dream to make my life easier as a network admin. Basically, I
am tired of fixing things, of worrying whether a hard disk will die, of
having to deal with data access, backups, etc.
I thought this through by looking at how most of my coworkers,
friends, and small offices use their PCs in day-to-day operations and
applying that workflow to a solution.
Before somebody screams: this assumes little or moderate daily data
generation, so that a laptop drive could basically hold the entire
company's data. (This may require email-purge rules and other things
like that.)
Anyway, here it is: basically, I suggest that (almost) all PCs in the
network be laptops with the exact same image. Furthermore, they
replicate their hard drives continuously (possibly with some delay).
This certainly applies to the user data and applications; it may not
be necessary for the OS.
I am not sure which technology to use here, but I thought something
derived from peer-to-peer technology, a distributed file system,
database replication technology, or even Freenet could be a
good base.
Since every laptop will contain ALL the data for the
whole network, every laptop uses hardware encryption
at the hard-disk level, with an external dongle/card/whatever, to limit
the risk when a laptop is lost or stolen.
Additionally, every login ALSO uses a dongle/card for access to the
account. This makes it (almost) impossible for somebody stealing a
laptop to get access to all the data, and it makes it
(almost) impossible for somebody to get at other people's data. If a
system dies, you just get a new one and sync it up. However, one main
idea is that you always have EVERYTHING you need right where you are,
no matter WHERE you are. Also, there is no UPS to worry about.
Communication between PCs could be implemented using VPN/IPsec or
whatever other protected mechanism. Internet access would have to be
"sandboxed", but UNIX-based OSes allow for that easily. That's the
gist of it. A lot of things can be configured in many ways, but the
whole point here is to simplify people's lives.
Look at it from a disaster-recovery angle: a lot of people bring their laptops home.
Even if the company's building burns down, along with a few employees'
homes, one surviving laptop is enough to bring the business back online.
Pros:
* seamless company disaster recovery
* seamless personal computer loss recovery (you only lose what changed
since the last sync)
* you can use ANY laptop and find YOUR own environment and files
* no central server/single point of failure
* no UPS except for the internet firewall (this comes from the PCs
being laptops)
Cons:
* sync across a lot of PCs might be tricky and would need tuning.
Maybe randomly select one as master, the way SMB master browser
elections work?
* each laptop needs to have enough space for the whole company's data
* maybe not appropriate for disk intensive applications (video capture...)
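The election idea in the first con could be sketched like this (everything here is hypothetical: the peer records, the os_level field, and the tie-break rule are placeholders, loosely modeled on how SMB browser elections prefer higher criteria values):

```python
def elect_master(peers):
    # Deterministic election: every peer applies the same rule to the
    # same peer list, so all peers agree on the winner without extra
    # traffic. Higher os_level wins; the peer id breaks ties.
    return max(peers, key=lambda p: (p["os_level"], p["id"]))

peers = [
    {"id": "laptop-a", "os_level": 1},
    {"id": "laptop-b", "os_level": 2},
    {"id": "laptop-c", "os_level": 2},
]
print(elect_master(peers)["id"])  # laptop-c (highest level, id tie-break)
```

A real implementation would also need to re-run the election whenever the elected master drops off the network.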
I wanted to post this here for other people to use if they think it's a
good idea (and also to preempt any proprietary company from claiming
"me first").
It seems that Windows Vista is coming with some automatic
synchronization across two PCs, so that's one step towards it, but we
have different goals.
I posted this somewhere and somebody pointed me towards Stateless Linux,
which seems pretty cool and close to what I was thinking of. I'll
look some more into it, but does anybody see this as useful for VERY
SMALL networks? (I already got bashed by enterprise admins sneering at
people who don't want a rack server, so if that's your intent, just
reply "me too".)
Feel free to comment (I know people will).
Nick
15 years