Sounds like there are two components to this: one is identifying/extracting the papers, the other is mining the text for information. There are probably a lot of existing options for the former, whereas the text mining part will likely require more customization on your part.
I know of this project (https://github.com/neurosynth/ACE), which was designed to extract coordinate information from neuroimaging papers. You could modify the code to extract plant and microorganism information instead.
Otherwise, you could run a customized PubMed search to get a list of papers and then apply some text mining to them.
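As a starting point for the search step, here's a minimal sketch that builds a query URL for NCBI's E-utilities `esearch` endpoint (a real API; the search terms below are just placeholder examples, swap in your actual plant/microbe vocabulary):

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint for programmatic PubMed queries.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(terms, retmax=100):
    """Return a full esearch URL with the given terms ANDed together."""
    params = {
        "db": "pubmed",
        "term": " AND ".join(terms),   # e.g. "rhizobacteria AND wheat"
        "retmax": retmax,              # cap on number of returned PMIDs
        "retmode": "json",
    }
    return ESEARCH + "?" + urlencode(params)

# Hypothetical example query; fetch the URL with any HTTP client to get PMIDs.
url = build_pubmed_query(["plant growth promotion", "rhizobacteria"])
print(url)
```

From the returned PMIDs you can then pull titles/abstracts with the companion `efetch` endpoint, or use a wrapper library such as Biopython's `Bio.Entrez` instead of raw URLs.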
For the actual text mining, first search whether anyone has attempted this before, or even something similar. This is a bit old and I don't know the journal, but it seems relevant: https://www.degruyter.com/view/j/jib.2011.8.issue-2/biecoll-jib-2011-184/biecoll-jib-2011-184.xml, maybe this too.
Consider how much of this process you want to automate versus handling with manual curation. For example, is it enough to automate the retrieval of relevant papers and then look through them manually or with some scripts, or are you hoping to have the relevant information summarized as well? That will determine how much you should invest in the text mining part.
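If lightweight scripting turns out to be enough, even simple dictionary matching over abstracts gets you surprisingly far. A toy sketch (the term lists are made up for illustration; in practice you'd use curated vocabularies such as NCBI Taxonomy names):

```python
import re

# Hypothetical, tiny vocabularies -- replace with real curated term lists.
PLANTS = {"arabidopsis", "wheat", "maize", "rice"}
MICROBES = {"pseudomonas", "bacillus", "rhizobium"}

def extract_mentions(text, vocabulary):
    """Return vocabulary terms appearing as whole lowercase words in text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(vocabulary & words)

abstract = ("Inoculation of wheat with Pseudomonas fluorescens "
            "improved growth compared to Bacillus subtilis.")
print(extract_mentions(abstract, PLANTS))    # ['wheat']
print(extract_mentions(abstract, MICROBES))  # ['bacillus', 'pseudomonas']
```

This won't catch multi-word names, synonyms, or abbreviations, which is where full NER tooling comes in, but it's a cheap way to triage papers before deciding whether heavier text mining is worth the investment.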