ID | Date | Icon | Author | Author Email | Category | OS | ELOG Version | Subject
451 | Wed Nov 12 12:34:02 2003 | | Stefan Ritt | stefan.ritt@psi.ch | Bug fix | Linux | 2.3.9 | Re: speed is very slow if logbook contains many entries |
This is a very nice measurement you made and it helps me a lot. I will
incorporate your suggestions into the next version. Under Windows, however,
there is no -HUP signal, so that won't work there. But what I can easily
do is check for a new configuration/language file once every access, not
once every loc() or getcfg(). I will think about it.
Thanks again,
Stefan |
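The "check once every access" idea above can be sketched in a few lines of C: compare the
configuration file's modification time at the start of each request and reparse only when
it has changed. The load_config() function below is a hypothetical stand-in for elogd's
own parser, not its actual code.

#include <sys/stat.h>
#include <time.h>

static time_t cfg_mtime;                 /* mtime of elogd.cfg at the last parse */

static void load_config(const char *path)
{
   (void) path;                          /* hypothetical: parse the file into memory */
}

/* call once at the start of each HTTP request */
void check_config(const char *path)
{
   struct stat st;

   if (stat(path, &st) == 0 && st.st_mtime != cfg_mtime) {
      load_config(path);                 /* file changed (or first call): reparse it */
      cfg_mtime = st.st_mtime;
   }
}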
455 | Thu Nov 20 17:55:57 2003 | | Stefan Ritt | stefan.ritt@psi.ch | Bug fix | Linux | 2.3.9 | Re: speed is very slow if logbook contains many entries |
I implemented the new scheme where
- under Windows, the configuration is only checked once every access
- under Unix, the configuration is read initially, and on every -HUP signal
This should speed up the server considerably. The next bottleneck is the
rsputs2() function, which requires quite some computing power in order to find
any "http://", "//", etc. strings in every output. If anybody knows a
more clever way of coding that, please let me know.
The new version is under CVS. |
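The Unix half of the scheme above could look roughly as follows: a SIGHUP handler only
sets a flag, and the main loop reloads the configuration when it sees that flag. The
load_config() function is again a hypothetical stand-in for elogd's configuration parser.

#include <signal.h>

static volatile sig_atomic_t reload_requested;

static void load_config(const char *path)
{
   (void) path;                          /* hypothetical: parse elogd.cfg into memory */
}

static void hup_handler(int sig)
{
   (void) sig;
   reload_requested = 1;                 /* defer the real work to the main loop */
}

int main(void)
{
   signal(SIGHUP, hup_handler);
   load_config("elogd.cfg");             /* read the configuration once at startup */

   for (;;) {
      if (reload_requested) {
         reload_requested = 0;
         load_config("elogd.cfg");       /* -HUP received: re-read the configuration */
      }
      /* ... accept and serve the next request ... */
   }
}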
457 | Mon Nov 24 10:25:10 2003 | | Etienne Van Caillie | etienne.vancaillie@mba.be | Bug fix | Linux | 2.3.9 | Re: speed is very slow if logbook contains many entries |
> I implemented the new scheme where
>
> - under Windows, the configuration is only checked once every access
>
> - under Unix, the configuration is read initially, and on every -HUP signal
>
> This should speed up the server considerably. The next bottleneck is the
> rsputs2() function, which requires quite some computing power in order to find
> any "http://", "//", etc. strings in every output. If anybody knows a
> more clever way of coding that, please let me know.
>
> The new version is under CVS.
Maybe use the logic in the 'format' attribute, like 'email', 'http' or 'ftp',
so that elog tests only these attributes. |
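That suggestion could be prototyped as a per-attribute check: run the expensive
link-detection output routine only for attributes whose configured format says they may
contain a URL or an email address. The attr_format() helper, the simplified
rsputs()/rsputs2() stand-ins and the attribute names below are illustrative assumptions,
not elogd's actual interface.

#include <stdio.h>
#include <string.h>

/* hypothetical: look up the configured format of an attribute ("email", "http", ...) */
static const char *attr_format(const char *attr_name)
{
   if (strcmp(attr_name, "Homepage") == 0) return "http";
   if (strcmp(attr_name, "Contact") == 0)  return "email";
   return NULL;
}

static void rsputs(const char *text)  { printf("%s", text); }            /* plain output      */
static void rsputs2(const char *text) { printf("[link-scan]%s", text); } /* link-markup output */

void output_attribute(const char *attr_name, const char *value)
{
   if (attr_format(attr_name) != NULL)
      rsputs2(value);      /* only these attributes pay for the "http://" search */
   else
      rsputs(value);
}

int main(void)
{
   output_attribute("Homepage", "http://example.org/\n");
   output_attribute("Subject",  "speed is very slow\n");
   return 0;
}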
619 | Tue Jul 27 18:33:52 2004 | | Fred Hooper | fhooper@sushisoft.com | Bug fix | Linux | 2.5.3 | speeding up elog : gcc compile optimizations |
Elog is a great program, but it can be slow.
I noticed that the gcc compiler options in the tarball Makefile were not
conducive to speed. So I tried changing the gcc options to:
CFLAGS = -O3 -funroll-loops -fomit-frame-pointer -W -Wall
For version 2.5.3 the compile worked, and the program appears to work as
normal, but a bit faster. I have not benchmarked it, but I think it should
offer a nominal increase in speed.
In particular, I removed the "-g" debug option, which is not needed for
production code and can be safely removed. In addition, I put in slightly
aggressive optimization settings, so if this doesn't work for you, you can
first try removing the -f settings, and then backing off the optimization to -O2.
Others may want to post other settings that work for them. |
622 | Wed Jul 28 15:03:17 2004 | | Stefan Ritt | stefan.ritt@psi.ch | Bug fix | Linux | 2.5.3 | Re: speeding up elog : gcc compile optimizations |
> I noticed that the gcc compiler options in the tarball Makefile were not
> conducive to speed. So I tried changing the gcc options to:
>
> CFLAGS = -O3 -funroll-loops -fomit-frame-pointer -W -Wall
Thank you for this hint; I changed my Makefile for the production code. However, I
could not feel any difference between the two options. The real problem is the
function getcfg(), which gets called many thousand times internally and has to parse
elogd.cfg each time. Once I implement a hash table for that function, elogd should
become faster by at least a factor of two. |
640 | Sat Jul 31 16:55:21 2004 | | Fred Hooper | fhooper@sushisoft.com | Bug fix | Linux | 2.5.3 | Re: speeding up elog : gcc compile optimizations |
> > I noticed that the gcc compiler options in the tarball Makefile were not
> > conducive to speed. So I tried changing the gcc options to:
> >
> > CFLAGS = -O3 -funroll-loops -fomit-frame-pointer -W -Wall
>
> Thank you for this hint; I changed my Makefile for the production code. However, I
> could not feel any difference between the two options. The real problem is the
> function getcfg(), which gets called many thousand times internally and has to parse
> elogd.cfg each time. Once I implement a hash table for that function, elogd should
> become faster by at least a factor of two.
Yeah - what's up with that?
I have seen this discussed before - it seems like it should be a priority to get this
fixed, as doing a hash table is straightforward, and the speed increase should be pretty
healthy. There are several C library routines available - check out "man 3 hsearch" for the
POSIX hash table management that is already available. Other C library search routines you
could use include bsearch (binary search), tsearch (binary tree search), and btree (B+ tree).
However, the easiest and most obvious one to use for elog appears to be a simple hash
table search (hsearch).
Is there something else which is making this difficult to do? |
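For reference, the POSIX interface mentioned above ("man 3 hsearch") boils down to three
calls: hcreate() to size the table, hsearch(item, ENTER) to insert, and hsearch(item, FIND)
to look up. A minimal, self-contained sketch of caching configuration keys that way (the
key and value strings are just examples, and they must stay allocated for the lifetime of
the table):

#include <search.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
   ENTRY item, *found;

   hcreate(1000);                          /* room for the configuration entries */

   item.key  = strdup("SMTP host");        /* insert one key/value pair          */
   item.data = strdup("mail.example.com");
   hsearch(item, ENTER);

   item.key = "SMTP host";                 /* later: near constant-time lookup   */
   found = hsearch(item, FIND);
   if (found)
      printf("%s = %s\n", found->key, (char *) found->data);

   hdestroy();
   return 0;
}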
641 | Mon Aug 2 09:05:48 2004 | | Stefan Ritt | stefan.ritt@psi.ch | Bug fix | Linux | 2.5.3 | Re: speeding up elog : gcc compile optimizations |
> Is there something else which is making this difficult to do?
Not really, but hsearch() & Co. are not available under Windows, so I have to extract the
source code from the GNU C library or so. Since the last discussion I had lots of other
topics on my to-do list, such as mirroring and cloning, but the speed issue is moving
higher and higher up the priority list. |
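Since hsearch() is the sticking point under Windows, one portable alternative (a sketch
under that assumption, not elogd code) is a small chained hash table in plain C, so the
same source compiles on both platforms:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKET 1024

struct node {
   char *key, *value;
   struct node *next;
};

static struct node *bucket[NBUCKET];

static unsigned hash(const char *s)           /* simple djb2-style string hash */
{
   unsigned h = 5381;
   while (*s)
      h = h * 33 + (unsigned char) *s++;
   return h % NBUCKET;
}

void cfg_put(const char *key, const char *value)
{
   struct node *n = malloc(sizeof *n);
   n->key   = strdup(key);
   n->value = strdup(value);
   n->next  = bucket[hash(key)];              /* prepend to the bucket's chain */
   bucket[hash(key)] = n;
}

const char *cfg_get(const char *key)
{
   struct node *n;
   for (n = bucket[hash(key)]; n != NULL; n = n->next)
      if (strcmp(n->key, key) == 0)
         return n->value;
   return NULL;
}

int main(void)
{
   cfg_put("Port", "8080");
   printf("Port = %s\n", cfg_get("Port"));
   return 0;
}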
653 | Tue Aug 3 16:59:36 2004 | | Drew | drew.card@gmail.com | Bug fix | Windows | 2.5.3 | Re: speeding up elog : gcc compile optimizations |
> > Is there something else which is making this difficult to do?
>
> Not really, but hsearch() & Co. are not available under Windows, so I have to extract the
> source code from the GNU C library or so. Since the last discussion I had lots of other
> topics on my to-do list, such as mirroring and cloning, but the speed issue is moving
> higher and higher up the priority list.
Speaking of Windows, I'd like to note that when I moved my call-tracking config from a slow BSD
system (PPro 200 MHz) to a faster Windows system (P3 733 MHz) I noticed a huge slowdown in the
interface: from perhaps 1-2 seconds before to 10-15 seconds after. Using the
Sysinternals file monitor I can see that elogd is hammering each log file in the directory. Not
sure what else is going on. 309 log files - only 1.25 MB.
Is there anything I can do short of pruning down the files?
[Edit: In both cases above my default view is filtered and sorted, so that I only see things
with a specific status. Taking away the filtering resolves this hit, but does not explain the
speed difference between platforms.]
-D |