Fatigue is one of the more interesting systems in the game, because the player has a fair amount of control over it via manager instructions: you give instructions in units of fatigue, encoding how much better you think one pitcher is than another. That's not the whole story, though -- those instructions feed a greedy algorithm, so you may want to be conservative. For example, if reliever A is rated 5 fatigue points better than reliever B, and reliever A currently carries 4 fatigue while reliever B has 0, you might still prefer to pitch reliever B even in a close game, so that reliever A can rest (otherwise reliever A may carry 4 fatigue all season, which is probably not optimal).
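
To make that concrete, here is a minimal sketch of what such a greedy fatigue-adjusted choice could look like. This is my own illustration under the stated assumptions (relievers compared by stated quality minus current fatigue); the function names are hypothetical and the engine's actual logic may differ:

    # Hypothetical sketch: a greedy bullpen pick informed by manager
    # instructions given in units of fatigue.

    def effective_value(quality, current_fatigue):
        # Greedy comparison: the manager's stated quality for a reliever
        # (in fatigue points) minus that reliever's current fatigue.
        return quality - current_fatigue

    def pick_reliever(relievers):
        # relievers: list of (name, quality_in_fatigue_points, current_fatigue)
        return max(relievers, key=lambda r: effective_value(r[1], r[2]))

    # Reliever A is rated 5 points better than B but carries 4 fatigue;
    # B is fresh. Greedily A still wins (5 - 4 = 1 vs. 0 - 0 = 0), so the
    # algorithm pitches A anyway -- which is why rating A conservatively
    # is the way to force some rest.
    print(pick_reliever([("A", 5, 4), ("B", 0, 0)]))  # -> ('A', 5, 4)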

Anyway, I ran one season in the engine with artificial fatigue to measure its effect on ERA. In each run, every pitcher carried the indicated amount of extra simulated fatigue at all times, relative to what will actually be in play. The league ERA for each offset:

extra fatigue: league ERA
-5: 4.40
-4: 4.50
-3: 4.55
-2: 4.79
-1: 5.32
 0: 5.26
 1: 5.67
 2: 5.70
 3: 6.09
 4: 5.87
 5: 6.26

A couple of things are apparent from this table: first, one season is not enough to draw fine-grained statistical conclusions, and second, fatigue clearly matters. It's not obvious which entries are outliers or what the best straight-line fit is (indeed, the effect is probably not actually linear in ERA), but a rough estimate is 0.20 ERA per fatigue point. As another data point, home-field advantage is equivalent to two fatigue points, and in a multi-season study home teams had a winning percentage of .557.
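
As a sanity check on that 0.20 figure, here is a minimal ordinary-least-squares fit over the single-season table above; the data come straight from this post, but the fit is my own arithmetic, not something the engine computes:

    # Least-squares slope of league ERA against the simulated fatigue offset.
    offsets = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]
    eras = [4.40, 4.50, 4.55, 4.79, 5.32, 5.26, 5.67, 5.70, 6.09, 5.87, 6.26]

    n = len(offsets)
    mean_x = sum(offsets) / n  # 0.0, since the offsets are symmetric
    mean_y = sum(eras) / n     # ~5.31, the overall league ERA

    # slope = covariance(x, y) / variance(x)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(offsets, eras)) \
            / sum((x - mean_x) ** 2 for x in offsets)

    print(f"ERA per fatigue point: {slope:.3f}")  # ~0.196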

Use this information as you see fit.