#1: Yes, reading blockwise into a buffer will be _many times_ faster than reading the bytes one by one.
#2,3: I think the best buffer size depends on the blocksize of the underlying filesystem - you should get the best performance if you read in chunks of the same size. IIRC this blocksize is usually 4 or 8 KB, but I could be wrong there.
#4: Hm no, if you're not allowed to view the file's metadata (its size) directly, then you really have to _read_ the data, and that means it gets written into a buffer. There's no read-and-discard call I know of.
#5: Yes, you could use fgets, but that might become awfully slow, too... fgets stops reading at the next newline, at EOF, or when the buffer is full, so if you happen to read a file full of ASCII 0x0a's (newlines), then you'll again be reading bytes one by one. I'd suggest the (non-portable) read call - which also has another advantage: it returns the number of bytes read rather than a pointer to the buffer, so you don't have to call strlen().