math - How to determine error in floating-point calculations?


I have the following equation that I want to implement in floating-point arithmetic:

equation: sqrt((a-b)^2 + (c-d)^2 + (e-f)^2)

I am wondering how the width of the mantissa affects the accuracy of the result, and what the correct mathematical approach to determining this is.

For instance, if I perform the following operations, how is the accuracy affected after each step?

Here are the steps (a rough code sketch follows the list):

Step 1: perform the following calculations in 32-bit single-precision floating point: x = (a-b), y = (c-d), z = (e-f)

Step 2: round the 3 results to a mantissa of 16 bits (not including the hidden bit).

Step 3: perform the following squaring operations: x2 = x^2, y2 = y^2, z2 = z^2

Step 4: round x2, y2, and z2 to a mantissa of 10 bits (after the decimal point).

Step 5: add the values: w = x2 + y2 + z2

Step 6: round the result to 16 mantissa bits.

Step 7: take the square root: sqrt(w)

Step 8: round to 20 mantissa bits (not including the hidden bit).
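
For concreteness, here is a rough Python sketch of how I imagine emulating these steps in software. round_to_mantissa and pipeline are just illustrative helpers I made up, np.float32 stands in for real single-precision hardware, and the sketch ignores exponent-range and double-rounding corner cases:

    import math
    import numpy as np

    def round_to_mantissa(x, bits):
        """Round x to `bits` fractional mantissa bits (hidden bit not counted),
        round-to-nearest-even. A rough emulation helper, not IEEE-exhaustive."""
        if x == 0.0 or not math.isfinite(x):
            return x
        m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
        s = 1 << (bits + 1)           # frexp's mantissa holds the hidden bit after the point
        return math.ldexp(round(m * s) / s, e)

    def pipeline(a, b, c, d, e, f):
        # step 1: differences in single precision (emulated with np.float32)
        x = float(np.float32(a) - np.float32(b))
        y = float(np.float32(c) - np.float32(d))
        z = float(np.float32(e) - np.float32(f))
        # step 2: round to a 16-bit mantissa
        x, y, z = (round_to_mantissa(v, 16) for v in (x, y, z))
        # steps 3-4: square, then round to a 10-bit mantissa
        x2, y2, z2 = (round_to_mantissa(v * v, 10) for v in (x, y, z))
        # steps 5-6: sum, then round to a 16-bit mantissa
        w = round_to_mantissa(x2 + y2 + z2, 16)
        # steps 7-8: square root, then round to a 20-bit mantissa
        return round_to_mantissa(math.sqrt(w), 20)

    # compare against a plain double-precision reference
    a, b, c, d, e, f = 1.2, 1.1, 3.4, 3.3, 5.6, 5.5
    ref = math.sqrt((a - b) ** 2 + (c - d) ** 2 + (e - f) ** 2)
    print(pipeline(a, b, c, d, e, f), ref)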

There are various ways of representing the error of floating-point numbers. There is relative error (a * (1 + ε)), the subtly different ulp error (a + ulp(a) * ε), and absolute error. Each of them can be used in analysing the error, but they all have their shortcomings. To get sensible results you have to take into account what happens precisely inside the floating-point calculations. I'm afraid the 'correct mathematical approach' is a lot of work, so instead I'll give you the following.
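
To make the first two notions concrete, here is a tiny Python sketch (the "computed" value is made up purely for illustration; math.ulp requires Python 3.9+):

    import math

    a_exact = 1.5
    a = 1.5 + 3e-16                        # pretend this came out of a calculation

    abs_err = abs(a - a_exact)
    rel_err = abs_err / abs(a_exact)       # the ε in a * (1 + ε)
    ulp_err = abs_err / math.ulp(a_exact)  # the ε in a + ulp(a) * ε

    print(rel_err, ulp_err)                # relative error ≈ 1.5e-16, ulp error = 1.0 here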

Simplified ulp-based analysis

The following analysis is quite crude, but it should give you a 'feel' for how much error you end up with. Treat these as examples only.

(a-b): the operation itself gives 0.5 ulp of error (if rounding is RNE). The rounding error of the operation may be small compared to the inputs, but if the inputs are similar and already contain error, you can be left with nothing but noise!
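
Here is a tiny illustration of that cancellation effect (np.float32 is only used to force single precision; the values are chosen for demonstration):

    import numpy as np

    a = np.float32(1.0000001)   # stored as 1.00000011920928955...
    b = np.float32(1.0)
    print(a - b)                # ~1.1920929e-07, although the 'true' difference is 1e-7
    # the subtraction itself is exact, but the representation error of the inputs
    # (well under 1 ulp on a and b) becomes a ~19% relative error in the result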

(a^2): this operation multiplies not only the input, but also the input error. If you are dealing with relative error, that means at least multiplying the errors by the other mantissa. Interestingly, there is a little normalisation step in the multiplier, which means the relative error is halved if the multiplication result crosses a power-of-2 boundary. The worst case is when the inputs multiply to just below that boundary, e.g. both inputs being close to sqrt(2). In that case the input error is multiplied up to 2*ε*sqrt(2). With the additional final rounding error of 0.5 ulp, the total error is ~2 ulp.
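
A rough numerical illustration of the input error being (roughly) doubled by squaring; the ε here is just a made-up input error, not something from your pipeline:

    import math

    x_exact = math.sqrt(2.0)
    eps = 2.0 ** -17                 # pretend x carries this relative error from earlier steps
    x = x_exact * (1.0 + eps)

    rel_err_sq = (x * x - x_exact * x_exact) / (x_exact * x_exact)
    print(rel_err_sq / eps)          # ~2: squaring roughly doubles the relative error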

Adding positive numbers: the worst case here is that the input errors are simply added together, plus a rounding error. We're now at 3*2 + 0.5 = 6.5 ulp.

sqrt: the worst case for sqrt is when the input is close to e.g. 1.0. The error essentially gets passed through, plus an additional rounding error. We're now at ~7 ulp.

Intermediate rounding steps: it takes a bit more work to plug in your intermediate rounding steps. You can model these as an error related to the number of bits you're rounding off. E.g. going from a 23-bit to a 10-bit mantissa with RNE introduces an additional error of up to 2^(13-1) ulp relative to the 23-bit mantissa, or 0.5 ulp of the new mantissa (you'll have to scale down the other errors if you want to work with that).
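
For example, you can measure one such 23-to-10-bit rounding step with the same kind of helper as in the sketch in your question (the input value is arbitrary):

    import math

    def round_to_mantissa(x, bits):              # same idea as the helper in the question
        m, e = math.frexp(x)
        s = 1 << (bits + 1)
        return math.ldexp(round(m * s) / s, e)

    x = 1.2345678901                             # arbitrary value with a full double mantissa
    r = round_to_mantissa(x, 10)
    ulp23 = math.ldexp(1.0, math.frexp(x)[1] - 1 - 23)   # one ulp of a 23-bit mantissa at x
    print(abs(r - x) / ulp23)                    # at most 2**12 of them, i.e. 0.5 ulp of the 10-bit mantissa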

I'll leave it to you to count the errors of your detailed example, but as the commenters noted, the rounding to a 10-bit mantissa will dominate, and the final result will be accurate to only about 8 mantissa bits.

