The results are not clear-cut. The review team takes extra care to distinguish “reproducibility” (would the results come out the same if you reran the analysis on the same data?) from “replicability” (would new experiments, with new data, produce similar results?).
The COS team has tried hard to make clear how tricky this is. If an experiment fails to replicate, that does not mean it was wrong; the problem could lie with the replication, not the original work. Conversely, an experiment that reproduces or replicates beautifully isn’t necessarily right, important, or textbook material.
Still, the failed replications are failures. Even with the same reagents or the same strain of genetically engineered mice, different people run experiments differently. Perhaps the labs that declined to share materials with the review team would have fared better. Perhaps the “high-impact” papers in the flashiest journals described bold, risky work that was never likely to replicate.
The stakes in cancer biology are high. It is supposed to deliver lifesaving medicines, after all. The work the Errington team couldn’t replicate probably didn’t send dangerous drugs toward patients, because Phase 2 and Phase 3 trials tend to filter out the bad seeds. According to the Biotechnology Industry Organization, only about 30 percent of drug candidates pass Phase 2 trials, and only 58 percent pass Phase 3. Which is a quiet admission that most drug candidates don’t actually work, especially in cancer.
Science obviously works, for the most part. So why is it so hard to replicate an experiment? “One answer is that science is hard,” Errington says. “That’s why we invest in research, why we put billions into making sure cancer research affects people’s lives. Which it does.”
The subtler takeaway from the cancer project is the distinction between what makes science good internally and what makes science good for the public it is meant to serve. “There are two orthogonal concepts here. One is transparency, and the other is validity,” says Shirley Wang, an epidemiologist at Brigham and Women’s Hospital. She is a co-leader of the Reproducible Evidence: Practices to Enhance and Achieve Transparency, or REPEAT, Initiative, which has done replication work on 150 studies that used electronic health records as their data. (Wang’s REPEAT paper has not yet been published.) “I think the problem is that we conflate the two,” she says. “You can’t know whether it’s good science unless you know a lot about its methods and can reproduce it. But even if you can, that doesn’t mean it was good science.”
So the point is not to relitigate individual results. It is for science to become more transparent, which should make results more reproducible, replicable, and even translatable to the clinic. Right now, academic researchers have little incentive to publish work in a form other researchers can build on. The incentives point elsewhere. “The measure of success in academic research is producing papers that get published in the highest-profile journals, in the greatest volume possible,” says Begley. “For industry, the measure of success is making drugs that work and help patients. That’s why we at Amgen couldn’t invest in a program that we knew from the outset had no legs.”