
…Actually, it's not. In fact, it's an option to trigger write ahead. This means that if Samba gets behind reading from the disk and writing to the network (or vice versa) by the specified amount, it will start overlapping network writes with disk reads (or vice versa).

    The read size option doesn't have a big performance effect on Unix, unless you set its value quite small. At that point, it causes a detectable slowdown. For this reason, it defaults to 2048 and can't be set lower than 1024.

read prediction
    Besides being counterintuitive, this option is also obsolete. It enables Samba to read ahead on files opened read-only by the clients. The option is disabled in Samba 2.0 (and late 1.9) because it interferes with opportunistic locking.

…anyone with Windows 98 will think Samba servers are horribly slow.

sync always
    Setting sync always causes Samba to flush every write to disk. This is good if your server crashes constantly, but the performance costs are immense. SMB servers normally use oplocks and automatic reconnection to avoid the ill effects of crashes, so setting this option is not normally necessary.

wide links
    Turning off wide links prevents Samba from following symbolic links in one file share to files that are not in the share. It is turned on by default, since following links in Unix is not a security problem. Turning it off requires extra processing on every file open. If you do turn off wide links, be sure to turn on getwd cache to cache some of the required data.

Appendix B: Samba Performance Tuning

Sizing Samba Servers

…much more complex and would contain rules like "not more than three disks per SCSI chain." (A good book on real models is Raj Jain's The Art of Computer Systems Performance Analysis.*) With that warning, we present the system in Figure B-2.

    * See Jain, Raj, The Art of Computer Systems Performance Analysis, New York, NY (John Wiley and Sons), 1991, ISBN 0-471-50336-3.

[Figure B-2. Data flow through a Samba server, with possible bottlenecks. The figure, titled "Data Flow from Disk to Network," shows data moving from the disk (Bottleneck 1) through the CPU (Bottleneck 2) to the NIC (Bottleneck 3).]

The flow of data should be obvious. For example, on a read, data flows from the disk, across the bus, through or past the CPU, and to the network interface card (NIC). It is then broken up into packets and sent across the network. Our strategy here is to follow the data through the system and see what bottlenecks will choke it off. Believe it or not, it's rather easy to make a set of tables that list the maximum performance of common disks, CPUs, and network cards on a system. So that's exactly what we're going to do.

Let's take a concrete example: a Linux Pentium 133 MHz machine with a single 7200 RPM data disk, a PCI bus, and a 10-Mb/s Ethernet card. This is a perfectly reasonable server. We start with Table B-2, which describes the hard drive, the first potential bottleneck in the system.

Table B-2. Disk Throughput

    Disk RPM    I/O Operations/second    KB/second
    7200        70                       560
    4800        60                       480
    3600        40                       320

Disk throughput is the number of kilobytes of data that a disk can transfer per second. It is computed from the number of 8KB I/O operations per second a disk can perform, which in turn is strongly influenced by disk RPM and bit density. In effect, the question is: how much data can pass below the drive heads in one second? With a single 7200 RPM disk, the example server will give us 70 I/O operations per second at roughly 560KB/s.

The second possible bottleneck is the CPU. The data doesn't actually flow through the CPU on any modern machines, so we have to compute throughput somewhat indirectly. The CPU has to issue I/O requests and handle the interrupts coming back, then transfer the data across the bus to the network card. From much past experimentation, we know that the overhead that dominates the processing is consistently in the filesystem code, so we can ignore the other software being run. We compute the throughput by just multiplying the (measured) number of file I/O operations per second that a CPU can process by the same 8K average request size. This gives us the results shown in Table B-3.

Table B-3. CPU Throughput

    CPU                  I/O Operations/second    KB/second
    Intel Pentium 133      700                     5,600
    Dual Pentium 133     1,200                     9,600
    Sun SPARC II           660                     5,280
    Sun SPARC 10           750                     6,000
    Sun Ultra 200        2,650                    21,200

Now we put the disk and the CPU together: in the Linux example, we have a single 7200 RPM disk, which can give us 560KB/s, and a CPU capable of starting 700 I/O operations, which could give us 5600KB/s. So far, as you would expect, our bottleneck is clearly going to be the hard disk.

The last potential bottleneck is the network. If the network speed is below 100 Mb/s, the bottleneck will be the network speed. After that, the design of the network card is more likely to slow us down. Table B-4 shows us the average throughput of many types of data networks. Although network speed is conventionally measured in bits per second, Table B-4 lists bytes per second to make comparison with the disk and CPU (Table B-2 and Table B-3) easier.

Table B-4. Network Throughput

    Network Type    KB/second
    ISDN                   16
    T1                    197
    Ethernet 10m        1,113
    Token ring          1,500

…Ethernet I/O (approximately 375KB) rather than disk I/O (up to 640KB). If so, you…

…We've included columns for both bytes and bits in the tables…
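The tuning options discussed earlier are all set in smb.conf. A minimal sketch, using the defaults described in the text; the [data] share name and its path are hypothetical:

```ini
[global]
    ; Flush every write to disk. Safe but very slow, and normally
    ; unnecessary, since SMB clients use oplocks and reconnection.
    sync always = no

    ; Obsolete read-ahead for files opened read-only; disabled in
    ; Samba 2.0 because it interferes with opportunistic locking.
    read prediction = no

    ; Start overlapping network writes with disk reads (or vice
    ; versa) once the gap reaches this many bytes; minimum 1024.
    read size = 2048

[data]
    path = /export/data    ; hypothetical share

    ; Follow symlinks pointing outside the share (the default).
    ; If you set this to no, also set "getwd cache = yes" to
    ; offset the extra processing on every file open.
    wide links = yes
```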
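The KB/second columns in Tables B-2 and B-3 all come from the same arithmetic the text describes: multiply the measured 8KB I/O operations per second by the 8KB request size. A quick sketch of that calculation:

```python
REQUEST_KB = 8  # average request size used throughout the appendix

def throughput_kb_per_sec(io_ops_per_sec):
    """KB/s moved if every operation transfers one 8KB request."""
    return io_ops_per_sec * REQUEST_KB

# Reproduce two example rows: the 7200 RPM disk from Table B-2
# and the Pentium 133 from Table B-3.
print(throughput_kb_per_sec(70))   # 7200 RPM disk -> 560
print(throughput_kb_per_sec(700))  # Pentium 133   -> 5600
```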
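Since the data passes through each stage in turn, the server's expected throughput is simply the slowest of the three. A minimal sketch for the example machine, with values taken from Tables B-2 through B-4:

```python
# Per-stage throughput for the example server, in KB/s.
disk_kbps = 70 * 8   # 7200 RPM disk, Table B-2
cpu_kbps = 700 * 8   # Pentium 133, Table B-3
net_kbps = 1113      # measured 10-Mb/s Ethernet, Table B-4
                     # (below the raw 10 Mb/s / 8 = 1250 KB/s line rate)

stages = {"disk": disk_kbps, "cpu": cpu_kbps, "network": net_kbps}
bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # -> disk 560
```

As the text predicts, the disk limits this configuration; upgrading the CPU would not help until the disk and network were faster.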
