Adrian Colyer summarizes a fascinating academic paper:

This program has a bug. When given an already encoded input, it encodes it again (replacing `%` with `%25`). For example, the input `https://example.com/t%20c?x=1` results in the output `https://example.com/t%2520c?x=1`, whereas in fact the output should be the same as the input in this case.

Let’s put our probabilistic thinking caps on and try to debug the program. We ‘know’ that the `url` printed on line 19 is wrong, so we can assign a low probability (0.05) to this value being correct. Likewise we ‘know’ that the input `url` on line 1 is correct, so we can assign a high probability (0.95). (In probabilistic inference, it is standard not to use 0.0 or 1.0, but values close to them instead.) Initially we’ll set the probability of every other program variable being correct to 0.5, since we don’t know any better yet. If we can find a point in the program where the inputs are correct with relatively high probability, and the outputs are incorrect with relatively high probability, then that’s an interesting place!

Since `url` on line 19 has a low probability of being correct, this suggests that `url` on line 18 and `purl_str` at line 12 are also likely to be faulty. PI Debugger actually assigns these probabilities of being correct 0.0441 and 0.0832 respectively. Line 18 is a simple assignment statement, so the chances of a bug there are fairly low. Now we trace the data flow. If `purl_str` at line 12 is likely to be faulty, then `s` at line 16 is also likely to be faulty (probability 0.1176).
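The double-encoding behavior itself is easy to reproduce. A minimal sketch, assuming the buggy encoder simply percent-encodes whatever it is handed (this is an illustration, not the paper’s actual code):

```python
from urllib.parse import quote

def encode_url(url):
    # Buggy encoder: blindly percent-encodes its input, even when the
    # input is already percent-encoded. The '%' of an existing escape
    # like '%20' gets re-encoded to '%25', producing '%2520'.
    return quote(url, safe=":/?=&")

already_encoded = "https://example.com/t%20c?x=1"
print(encode_url(already_encoded))
# → https://example.com/t%2520c?x=1
```

A correct encoder would either detect already-encoded input or decode before re-encoding, so that the output here would equal the input.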

I’m interested to see someone create a practical implementation someday.
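In the meantime, the backward-propagation idea in the excerpt can at least be sketched as a toy. The `suspect` update rule and its numbers below are hypothetical; PI Debugger performs real probabilistic inference, and its figures (0.0441, 0.0832, 0.1176) come from that model, not from this rule:

```python
def suspect(stmt_reliability, p_output_correct):
    """Toy backward update: if a statement is usually reliable, a
    likely-wrong output makes the values feeding it suspect too.
    Blends a neutral 0.5 prior with the downstream evidence."""
    return 0.5 * (1 - stmt_reliability) + p_output_correct * stmt_reliability

p_url_19 = 0.05                            # printed value observed wrong
p_url_18 = suspect(0.9, p_url_19)          # assignment feeding line 19
p_purl_str_12 = suspect(0.9, p_url_18)     # value feeding line 18
print(p_url_18, p_purl_str_12)             # both well below 0.5: suspicious
```

The point of the toy is only the shape of the reasoning: strong evidence at the endpoints (input correct, output wrong) flows through the data-flow graph and concentrates suspicion on the statements in between.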

Kevin Feasel

2018-06-20

Machine Learning