FirebirdSQL/Q&A/The problem of fbserver using 100% CPU

From VFP Wiki


> I am using Firebird 1.5 releases 8 and 9 on an Athlon 1.8 GHz with 512 MB RAM, running Linux Red Hat 9, with approximately 64 tables and 109 stored procedures. The fbserver process will use 100% of the CPU when the database file is over 600 MB.


  • Sean (sleyne@atkin.com)

There are many possible reasons, so here are some questions to consider.


How many rows are you counting? How many rows are in the table?


Have you reviewed the database statistics?

What is the DB page size?

What is the size of the page cache?

What is the sweep interval?

What is the difference between the Oldest Transaction and Next Transaction?

What is the difference between the Oldest Active Transaction and Next Transaction?
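
Most of these figures can be read from the database header page with Firebird's gstat tool. A minimal sketch follows; the file path is a placeholder, and the page-cache note describes the usual Firebird 1.5 configuration rather than anything stated in this thread.

 # Read the header-page statistics (the path is a placeholder for your database file)
 gstat -h /var/firebird/mydb.fdb
 #
 # Lines of interest in the output:
 #   Page size          - the DB page size
 #   Sweep interval     - automatic sweep threshold (0 = disabled)
 #   Oldest transaction - the oldest interesting transaction (OIT)
 #   Oldest active      - the oldest active transaction (OAT)
 #   Next transaction   - the next transaction number
 # A large gap between Oldest active and Next transaction usually means a
 # long-running transaction is keeping back versions (garbage) alive.
 #
 # The page-cache size is not part of the header statistics: it comes from
 # DefaultDbCachePages in firebird.conf, or a per-database override set with
 # gfix -buffers N.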

  • iblist@thorsoftware.com.br

Hi

I have a 1 GB database running on RedHat 8 and FB 1.5, and all works fine: sub-second response time on joins between a 900K-record table, a 1.5M-record table, and some other small tables (15K records, 500 records, etc.).

I did a test and got 0.02 seconds to retrieve 8 rows joining a 1.5M-record table with a 900K-record table and a 43-record table.

If I do a select count(*) on the 1.5M-record table, then I think I will wait a bit :-)

Just tried right now:

17.0235 seconds (1,512,355 records).

This is a weak server (Duron 1.4 GHz, 128 MB RAM, IDE disk), but right now I think I was the only one querying the DB.

Looking at top's screen in an ssh session to the server, I can see FB using up to 42% of the CPU.

I did a select count(*) on another table with 994,238 records. I had not used that table before, so no pages should have been in the cache: FB CPU usage peaked at 47%, execution time 8 seconds.
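
For anyone who wants to reproduce this kind of measurement, isql can report the elapsed time and page I/O of a statement. A minimal sketch, with BIGTABLE as a placeholder table name:

 /* In an isql session connected to the database */
 SET STATS ON;
 SELECT COUNT(*) FROM BIGTABLE;
 /* After the row count, isql prints statistics such as Elapsed time,
    Reads, Writes and Fetches, which show how much page I/O the scan did. */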

When you mention that a select count(1) from SomeTable sends the server to 100%, that is to be expected (on a 300K-record table): since FB needs to visit every record (and therefore every page that holds those records), you can expect a lot of disk activity and some processing power to be used.
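
The reason every record must be visited is Firebird's multi-generational architecture: different transactions can see different numbers of rows, so the count cannot simply be taken from an index or a stored total. You can confirm that such a query is a full (natural) scan by asking isql for the plan; a sketch, reusing the SomeTable name from the post:

 SET PLAN ON;
 SELECT COUNT(1) FROM SomeTable;
 /* isql prints something like PLAN (SOMETABLE NATURAL) before the result:
    a natural scan, i.e. every data page of the table is read. */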

When the server touches a record, it tries to clean up its back versions (if they exist); this can cause some delay too...
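
If back-version cleanup is what is costing the time, a manual sweep run at a quiet moment removes the accumulated garbage in one pass instead of penalising ordinary queries. A sketch with a placeholder path; depending on how the server is set up, gfix may also need credentials:

 # Sweep the database manually: visits every record and removes back versions
 # that are no longer needed by any transaction.
 gfix -sweep /var/firebird/mydb.fdb
 # Optionally tune the automatic sweep interval (20000 is the default;
 # 0 disables automatic sweeping).
 gfix -housekeeping 20000 /var/firebird/mydb.fdb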

I have no more thoughts about it...

See you!