Well, if you are working with any kind of probabilistic logic, it's de facto going to have to be paraconsistent, right? Because if you think that there's an 80% chance that P, then there is a 20% chance that not P.
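The complement rule is just arithmetic, but it's the whole point, so here it is as a one-liner (the helper name is my own):

```python
# The complement rule of probability: Pr(not P) = 1 - Pr(P).
# A probabilistic logic cannot assert P without implicitly
# assigning mass to not P.
def prob_not(p_true: float) -> float:
    """Probability of not-P, given the probability of P."""
    return 1.0 - p_true

print(prob_not(0.8))   # approximately 0.2 (up to float rounding)
```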
Perhaps there's a devil in the semantic details: if the semantics is measure-theoretic, one could say that P and not(P) are disjoint sets, so (P and not(P)) never holds. But as you trundle along, making logical deductions, you have to worry about several phenomena from measure-preserving dynamical systems. Most seriously: is there mixing (topological, weak, or strong)? Because if there is, that means there will be regions where you have great trouble telling apart the deductions that follow from P and those that follow from not(P). (Another interesting one: are there wandering sets? They have a similar/opposite effect...) I have never seen any text that used the words 'mixing' and 'probabilistic logic' at the same time, so ...?
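A toy illustration of the worry (my own, not from any probabilistic-logic text): the doubling map x → 2x mod 1 is strongly mixing, and under iteration, points from two initially disjoint intervals (stand-ins for 'where P holds' and 'where not(P) holds') get thoroughly interleaved:

```python
# Toy illustration: the doubling map T(x) = 2x mod 1 is strongly mixing.
# Track sample points from two disjoint intervals (stand-ins for the
# regions where P and not(P) hold) and watch them interleave.
def doubling(x: float) -> float:
    return (2.0 * x) % 1.0

# Points sampled from [0, 0.5) ("P") and [0.5, 1) ("not P").
p_pts     = [i / 1000.0 for i in range(0, 500)]
not_p_pts = [i / 1000.0 for i in range(500, 1000)]

for _ in range(10):            # iterate the dynamics a few times
    p_pts     = [doubling(x) for x in p_pts]
    not_p_pts = [doubling(x) for x in not_p_pts]

# After a few iterations, both families land all over [0, 1): from a
# point's position alone you can no longer tell which set it came from.
in_left = sum(1 for x in p_pts if x < 0.5)
print(in_left, "of", len(p_pts), "P-points are now in the left half")
```

(Roughly half of the P-points end up in each half, i.e. exactly where the not(P)-points also are.)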
Another semantics is to abandon crisp truth values (i.e. to abandon the notion that something must be either true or false, and that probability only measures our uncertainty). This tends to be the choice in AI work, because it handles natural language. E.g. 'John is big': if John is 6 ft tall and 200 lbs in America, this is a statement that is simultaneously both true and false: he's bigger than some, smaller than others. If John really were this size, and I knew it, I could honestly say "I am certain that John is big and John is not big", and so paraconsistency is unavoidable in natural language.
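One standard way to cash this out is a fuzzy-logic membership function; the sketch below uses invented crossover heights, so the numbers are for illustration only:

```python
# A fuzzy-logic sketch: "big" as a graded membership function over height.
# The crossover points (66 in and 78 in) are invented for illustration.
def is_big(height_in: float) -> float:
    """Degree to which someone of this height counts as 'big', in [0, 1]."""
    lo, hi = 66.0, 78.0   # below lo: clearly not big; above hi: clearly big
    if height_in <= lo:
        return 0.0
    if height_in >= hi:
        return 1.0
    return (height_in - lo) / (hi - lo)   # linear ramp in between

john = is_big(72.0)   # 6 ft
print(f"John is big: {john:.2f}, John is not big: {1 - john:.2f}")
```

Both degrees are nonzero at once, which is exactly the "big and not big" situation in the text.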
Are there other semantics? I dunno; I haven't studied Kripke semantics, etc.
BTW, as a practical matter, if one abandons crisp truth values, then it becomes handy to carry a second number along with the probability: the confidence. This allows you to talk about the probability of something being true, but if you are not confident about it, you can't proceed very far with logical deductions -- it's pointless if the confidence of your logical deduction is zero, even if the probability you deduced was one.
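A minimal sketch of carrying (probability, confidence) pairs through a deduction. The combination rule here is a toy of my own (confidences simply multiply); real systems such as PLN or NARS use more elaborate formulas:

```python
# Sketch: propagate (probability, confidence) pairs through deduction.
# The combination rule is a toy: probabilities and confidences multiply.
from dataclasses import dataclass

@dataclass
class TruthValue:
    probability: float   # how likely the statement is to be true
    confidence: float    # how much evidence backs that estimate, in [0, 1]

def deduce(premise: TruthValue, rule: TruthValue) -> TruthValue:
    """Toy modus ponens: chain P and (P -> Q) into a truth value for Q."""
    return TruthValue(
        probability=premise.probability * rule.probability,
        confidence=premise.confidence * rule.confidence,
    )

p      = TruthValue(probability=1.0, confidence=0.0)  # "true", but no evidence
p_to_q = TruthValue(probability=0.9, confidence=0.8)
q = deduce(p, p_to_q)
print(q)   # probability 0.9, confidence 0.0 -- the conclusion is worthless
```

This is the point in miniature: a zero-confidence premise poisons everything downstream, no matter how high the probabilities are.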
The notion of confidence also makes it easier to abandon the law of the excluded middle. Up top, the axioms of probability force you to conclude that if there's an 80% chance that P, then there must be a 20% chance that not P. By contrast, if we were doing probabilistic intuitionistic logic, we would prefer to say: "there's an 80% chance that P, and we pretty much can't say anything about not P". That is, intuitionistic logic is a kind of three-state logic system: 'true', 'false', and 'unknown'. The issue that arises is how to represent 'unknown' with a probability. Should it be 50-50? I.e. should I assign a likelihood of 0.5 to both P and not P, if I know nothing about P? That seems labored and awkward. Perhaps workable if you stick to a crisp truth-value semantics (hoping to ultimately find a likelihood of 0 or 1). But really, as a practical matter, it's easier to just say "I don't know", viz. assign a confidence of zero, and the probability of P becomes irrelevant.
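The contrast can be made concrete with plain (probability, confidence) pairs -- a toy encoding of my own:

```python
# Toy comparison of two encodings of "I know nothing about P",
# using plain (probability, confidence) pairs.
fair_coin = (0.5, 1.0)   # a genuinely known 50-50 event
unknown_a = (0.5, 1.0)   # 'unknown' forced into the excluded-middle mold
unknown_b = (0.5, 0.0)   # 'unknown' as zero confidence; the 0.5 is a dummy

# Encoding (a) is indistinguishable from a fair coin we know all about:
print(unknown_a == fair_coin)   # True -- the awkwardness described above
# Encoding (b) keeps "no evidence" distinct from "known even odds":
print(unknown_b == fair_coin)   # False
```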
I don't know of any discussion of the above ideas developed from a traditional, rigorous, axiomatic mathematical approach. There are mathematical texts on non-monotonic logic (e.g. Goertzel et al., Pei Wang), but they don't reach back and make connections to intuitionistic logic or paraconsistent logic (maybe I should read them again?), and they don't examine any potential issues from chaotic dynamical systems.