Why Continuous Entropy Gets Weird and How to Fix It
Entropy is often explained as a "number for uncertainty": roughly, how many yes/no questions you would need, on average, to pin down an outcome. With continuous variables (a measured time, position, or voltage), entropy starts acting strangely: its value can change simply because we change variables, the math can hide unit issues (seconds, meters, and so on) inside a logarithm, and in some setups the entropy can even come out negative. In this project, I show that these "bugs" are not mistakes in entropy itself; they appear because we left out a rule for how to count possible states in a continuous world. The key idea is a density of states (a reference measure), written g(x), which acts like a built-in "ruler" telling us how many distinguishable states live inside each small interval dx. In this view, the usual probability density can be written as P(x) = g(x)p(x): "states per dx" times "probability per state." With this extra ingredient, the entropy expression becomes consistent and the weird behavior becomes understandable.
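As a brief illustration (a sketch in standard notation, not necessarily the exact expression used in the presentation): with P(x) = g(x)p(x), the entropy can be written relative to the density of states as

\[
  H \;=\; -\int P(x)\,\log\frac{P(x)}{g(x)}\,dx
    \;=\; -\int g(x)\,p(x)\,\log p(x)\,dx .
\]

The ratio P(x)/g(x) = p(x) is dimensionless, so no units hide inside the logarithm, and because P(x) and g(x) pick up the same Jacobian factor under a change of variables, H no longer shifts when we reparameterize; any negative values that remain can be read relative to the reference count of states rather than as a paradox.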
Keywords: entropy, density of states, information entropy, continuous distribution, Bayesian inference
Topic(s): Physics, Statistics, Mathematics
Presentation Type: Oral Presentation
Session: TBA
Location: TBA
Time: TBA