The document discusses using crowdsourcing for relation extraction in the medical domain, highlighting problems with traditional gold-standard annotations and the potential of crowdsourced human annotation to improve data quality. It introduces the CrowdTruth method for interpreting disagreement among annotators and illustrates its application in training a relation extraction classifier. The research questions focus on choosing score thresholds for labeling relations as positive or negative, comparing crowdsourced data with expert annotations, and evaluating classifier performance with crowd-derived metrics.
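To make the thresholding step concrete, below is a minimal sketch of how a CrowdTruth-style sentence-relation score can be computed and turned into training labels. It assumes the score is the cosine similarity between the aggregated worker annotation vector for a sentence and the unit vector of each candidate relation, as described in the CrowdTruth literature; the relation names, worker votes, and threshold values here are purely hypothetical, since the paper tunes its own cut-offs.

```python
import numpy as np

def sentence_relation_scores(worker_vectors):
    """Sum per-worker annotation vectors into one sentence vector, then
    score each relation as the cosine similarity between that vector and
    the relation's unit basis vector (CrowdTruth sentence-relation score)."""
    sentence_vec = np.sum(worker_vectors, axis=0)
    norm = np.linalg.norm(sentence_vec)
    if norm == 0:
        return np.zeros(sentence_vec.shape, dtype=float)
    # Cosine with basis vector e_r reduces to votes_r / ||sentence_vec||.
    return sentence_vec / norm

# Hypothetical example: 5 workers judging 3 candidate relations
# (e.g. CAUSES, TREATS, NONE); each row is one worker's selections.
workers = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [1, 0, 0],
])

scores = sentence_relation_scores(workers)

# Assumed illustrative thresholds; the paper's research questions
# concern how to set these from the crowd data itself.
POS_THRESHOLD = 0.7
NEG_THRESHOLD = 0.3
labels = ["positive" if s >= POS_THRESHOLD
          else "negative" if s <= NEG_THRESHOLD
          else "ambiguous" for s in scores]
print(dict(zip(["CAUSES", "TREATS", "NONE"],
               zip(scores.round(2), labels))))
# -> CAUSES scores ~0.94 (positive); TREATS and NONE ~0.24 (negative)
```

Sentences scoring between the two cut-offs are left ambiguous rather than forced into a binary label, which is the core idea of treating annotator disagreement as signal rather than noise.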