Reframed the original question to include more background information:
My experiment concerns individuals' ability to recall information about animals. It uses a within-subject design with two factors: whether an animal is dangerous or safe (danger), and whether an animal lives in Africa or Australia (location). In the recall phase, participants had to identify the animals they saw as dangerous/safe and African/Australian. Responses were coded as hits (stimulus dangerous, response dangerous), misses (stimulus dangerous, response safe), correct rejections (stimulus safe, response safe), and false alarms (stimulus safe, response dangerous).

We expected that sensitivity depends on the importance of the information to wellbeing (as has been found in other studies), but also that bias is affected by individual differences. d' refers to sensitivity, as it is the distance between the signal (danger) and noise (safe) distributions. c refers to bias (the respondent's criterion, i.e. a tendency to say yes or no), and c' is c relative to d'. However, in some cases d' is 0, namely when the hit rate (hits / (hits + misses)) equals the false-alarm rate (false alarms / (false alarms + correct rejections)), making it impossible to compute c' (= c/d'). My question is therefore whether there is a commonly used and supported correction. More information about signal detection theory and its use in the cognitive sciences can be found here.
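To make the coding concrete, here is a minimal Python sketch (with hypothetical counts and variable names of my own, not from my actual data) of how the hit and false-alarm rates are obtained from the four response categories:

    def rates(hits, misses, false_alarms, correct_rejections):
        """Hit and false-alarm rates from the four response counts."""
        H = hits / (hits + misses)                                 # P("danger" | dangerous stimulus)
        FA = false_alarms / (false_alarms + correct_rejections)    # P("danger" | safe stimulus)
        return H, FA

    # Hypothetical counts for one participant:
    print(rates(hits=18, misses=7, false_alarms=18, correct_rejections=7))  # -> (0.72, 0.72)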
Thank you for the comments so far; I hope the question is clearer now.
Original question: As described in Detection theory: A user's guide, page 28, the bias indicator c can be normalized into c' by dividing it by d'. In short, d' = z(H) - z(FA); c = -0.5(z(H) + z(FA)); c' = c/d'. However, in some of my cases d' is 0 (for instance, H = .72, FA = .72). Is there a standard correction that allows for the calculation of c'?
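For reference, here is a minimal Python sketch of these formulas (my own illustration, not taken from the book, using scipy's inverse normal CDF for the z-transform), showing how c' becomes undefined when H = FA:

    from scipy.stats import norm

    def sdt_measures(H, FA):
        """d', c and c' from hit rate H and false-alarm rate FA."""
        zH, zFA = norm.ppf(H), norm.ppf(FA)   # z-transform of the rates
        d_prime = zH - zFA
        c = -0.5 * (zH + zFA)
        c_prime = c / d_prime if d_prime != 0 else float("nan")  # undefined when d' = 0
        return d_prime, c, c_prime

    print(sdt_measures(0.72, 0.72))  # d' = 0, c ≈ -0.58, c' = nan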