Some thoughts.
The thing I was disappointed in was that I had at least 5 questions based on performance charts
A little over the top, considering that the examiner needs to cover a wide range of MOS topics?
I can't remember having to put in exact numbers. The chart-type questions were always multiple choice.
You need to run the charts to get an answer before you can assess the multiguess options. Surely it's not too difficult, then, to type in your answer rather than tick an option?
Two or three of these questions used the linear chart.
With the fallout of the Yates Report some years ago, you are going to see very few of the old DCA "whiz around the boxes" P-charts out in the Industry these days. The other format is very common in the US POH market so, unfortunately, you need to practise it sufficiently to ramp up the competence. At the end of the day, both formats do much the same thing and are used in much the same way, so it's just a case of some practice to get on top of them. A bit like driving Holdens or Fords.
As with any charts (performance, trimsheets, etc.), attention to accuracy in plotting, especially making sure you run parallel to the guide lines, is quite important. With lots of practice you will find that you can eyeball the plots for accuracy quite easily. Again, it is just a matter of lots and lots of practice.
I think the problem was somewhere within the questions
Keep in mind, though, that WAT gradients were established/published for nil wind - e.g., check one of the P-charts and you will see that there is no provision to enter wind in the carpets when determining the WAT climb gradient limit. Non-WAT limit questions, though, should be looking at wind.
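To put a number on that distinction: the WAT limit is a gradient through the air mass, while obstacle clearance is a gradient over the ground. A minimal sketch, with invented figures for rate of climb and TAS:

```python
# Minimal sketch: why wind matters for obstacle-clearance gradients but not
# for the WAT (weight-altitude-temperature) climb limit. All numbers are
# invented for illustration, not taken from any particular P-chart.

KT_TO_FPM = 101.27  # 1 kt is approx. 101.27 ft/min

def gradient_pct(rate_of_climb_fpm: float, speed_kt: float) -> float:
    """Climb gradient (%) = height gained / distance travelled x 100."""
    return rate_of_climb_fpm / (speed_kt * KT_TO_FPM) * 100.0

roc_fpm = 500.0   # invented still-air rate of climb
tas_kt = 80.0     # invented climb TAS

# WAT limit: gradient through the air mass, so nil wind by definition.
still_air = gradient_pct(roc_fpm, tas_kt)

# Obstacle clearance: gradient over the ground, so ground speed applies.
headwind_kt = 15.0
over_ground = gradient_pct(roc_fpm, tas_kt - headwind_kt)

print(f"still-air (WAT) gradient: {still_air:.2f}%")   # ~6.17%
print(f"gradient over the ground: {over_ground:.2f}%") # ~7.60%
```

Same aeroplane, same day, but the gradient relative to the obstacles improves with the headwind - which is precisely why the non-WAT questions should be looking at wind.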
Obviously the margins applied are very tight
And that's quite silly from an engineering viewpoint. Very much like measuring the distance from Melbourne to Sydney with a one-foot school rule in a doomed attempt to get high precision. However, that's the practical consideration out in the real world of the Industry rather than the rarefied world of theory examinations, where the examination requirements aren't (and I see no reason why they should be) driven by practical concerns. On the other hand, though, I see little value in going to extreme and inappropriate lengths to obtain answers from a graph way beyond the precision and accuracy which went into the preparation of said graph.
The DCA P-charts' main advantage was that DCA Performance Section published the technique in several of the old tech notes with standard equations which were applied for all the P-chart exercises. The accuracy of the output can vary quite a bit, depending on who did the work and how they went about the test work. Done well, the output, generally, is more than fit for purpose but, certainly, not dead accurate by any means. Trying to figure the charts to the nearest foot is, quite frankly, silly from a practical viewpoint. The examiner, however, is not constrained by real world practicalities and, providing that the accuracy required in the exam is compatible with the equipment permitted to be used by the candidate, it really is just an examination technique consideration.
I did quite a few of the P-charts in years past as an Industry Consultant, so I have a degree of understanding of the system both as we did it and as the DCA folk did it. Given that we all followed the DCA tech note provisions, the differences mainly were in the detail of how the tests were done and how the charts were prepared physically.
The military and other better funded groups used relatively low-tech cinetheodolites (sort of like a surveying theodolite with azimuth and elevation trace outputs, along with a video-style record of tracking the aircraft, from which we could figure out tracking corrections to the marked aircraft CG). These were quite a bit downmarket from the high-end kit used for ordnance and rocket tracking but worked real fine. I used the then RAAF kit for Nomad and UH-1 trials in the dim distant past.
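For the curious, the sums behind two-station cinetheodolite tracking boil down to intersecting two lines of sight from surveyed positions. A rough sketch of that geometry only - the station coordinates and angles below are invented, and the real data reduction was considerably more involved:

```python
import numpy as np

# Each station contributes a line of sight (azimuth + elevation); the
# aircraft position is estimated as the midpoint of the closest approach
# of the two rays.

def los_unit_vector(az_deg: float, el_deg: float) -> np.ndarray:
    """Unit line-of-sight vector: x = east, y = north, z = up."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1+t*d1 and p2+s*d2."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return (p1 + t * d1 + p2 + s * d2) / 2.0

# Two surveyed stations either side of the runway (metres, invented).
sta1 = np.array([0.0, 0.0, 2.0])
sta2 = np.array([300.0, 0.0, 2.0])

# One synchronised pair of az/el readings (degrees, invented).
pos = triangulate(sta1, los_unit_vector(40.0, 5.0),
                  sta2, los_unit_vector(-20.0, 6.0))
print(pos)  # estimated aircraft position at that frame
```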
The DCA approach, though, was very pragmatic and low cost. During the takeoff or landing tests, a number of still photographs would be taken of the aircraft so that its position could be estimated (quite accurately) against the runway environs background, which had been surveyed prior to the test program. Worked OK but, again, not super accurate when the timing was by stopwatch.
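The reduction for that method is nothing fancier than timed positions and finite differences. A quick sketch with invented figures, mainly to show why the stopwatch dominates the error budget:

```python
# Aircraft positions read off the surveyed runway background at stopwatch
# times, with ground speed from simple finite differences. All figures are
# invented for illustration.

positions_m = [0.0, 120.0, 260.0, 420.0]   # along-runway positions
times_s     = [0.0, 3.0, 6.0, 9.0]         # stopwatch times

for i in range(1, len(positions_m)):
    v = (positions_m[i] - positions_m[i - 1]) / (times_s[i] - times_s[i - 1])
    print(f"segment {i}: {v:.1f} m/s")

# A 0.2 s stopwatch error on a 3 s interval is nearly 7% of the interval,
# which is why the cheap method was fit for purpose but never super accurate.
err = 0.2 / 3.0 * 100
print(f"timing error contribution: ~{err:.0f}%")
```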
In recent years, of course, accuracy in testing has gone up in leaps and bounds. The OEMs usually run very precise computer model simulations which are then proved out by the test program using fancy kit such as DGPS and the like. As an example, when the runway width test requirements came in for Australia, I did several aircraft using long-lens video to measure lateral deviations, with more than acceptable results. Boeing did the 737 with all sorts of high-end tech gear for little effective improvement over the cheap approach.
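The long-lens video method is just small-angle trigonometry: lateral offset is roughly pixel offset times pixel pitch times range over focal length. A sketch with invented camera figures:

```python
# Pinhole-camera approximation for lateral deviation from a long-lens video
# frame. All numbers are invented for illustration.

def lateral_offset_m(pixel_offset: float, pixel_pitch_um: float,
                     focal_length_mm: float, range_m: float) -> float:
    """Small-angle lateral displacement of the target from the camera axis."""
    return (pixel_offset * pixel_pitch_um * 1e-6 * range_m
            / (focal_length_mm * 1e-3))

# e.g. 40 pixels of offset, 10 micron pixels, 600 mm lens, aircraft 1500 m out
print(f"{lateral_offset_m(40, 10.0, 600.0, 1500.0):.2f} m")  # approx. 1.00 m
```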
I'd like to be assured that CASA's answer is absolutely correct in the first place!
Two thoughts here -
(a) if we are looking at the physical accuracy of matching the aircraft to the graph, it should be reasonable and fit for purpose, but it sure enough is not in the realm of precision metrology.
(b) if we are looking at the DCA P-charts specifically, we can use the equations which went into their preparation to get an answer to whatever precision and accuracy you might choose. I have the relevant tech notes tucked away somewhere in the dusty filing cabinets - perhaps I should dig them out one day and reverse engineer the exam booklet P-charts?
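Pending that bit of archaeology, the general approach would be to read a grid of points off the carpet and fit a candidate model by least squares. The model form and every figure below are invented for illustration - the tech note equations would be the proper starting point:

```python
import numpy as np

# (density height ft, weight kg, chart-read takeoff distance m) -- invented
points = np.array([
    [0.0,     900.0, 420.0],
    [0.0,    1100.0, 560.0],
    [3000.0,  900.0, 510.0],
    [3000.0, 1100.0, 680.0],
    [6000.0,  900.0, 625.0],
    [6000.0, 1100.0, 830.0],
])
dh, wt, tod = points.T

# Candidate model (assumed, not from the tech notes):
#   ln(TOD) = c0 + c1*dh + c2*ln(wt)
A = np.column_stack([np.ones_like(dh), dh, np.log(wt)])
coef, *_ = np.linalg.lstsq(A, np.log(tod), rcond=None)

resid = np.exp(A @ coef) - tod
print("coefficients:", coef)
print("worst residual: %.1f m" % np.abs(resid).max())
```

If the residuals come out small across the whole carpet, the assumed model form is probably close to what went into the chart; if not, try another form. Either way, the fit can then be interrogated to any precision you like, which is rather more than can be said for a pencil on the printed chart.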