Please provide a description of the function:
def recordParser(paper):
    tagList = []
    doneReading = False
    l = (0, '')
    for l in paper:
        if len(l[1]) < 3:
            #Line too short
            raise BadWOSRecord("Missing field on line {} : {}".format(l[0], l[1]))
        elif 'ER' in l[1][:2]:
            #Reached the end of the record
            doneReading = True
            break
        elif l[1][2] != ' ':
            #Field tag longer than 2 or offset in some way
            raise BadWOSFile("Field tag not formed correctly on line " + str(l[0]) + " : " + l[1])
        elif ' ' in l[1][:3]:
            #The string is three spaces in a row:
            #no new tag, append line to current tag (last tag in tagList)
            tagList[-1][1].append(l[1][3:-1])
        else:
            #New tag, create new entry at the end of tagList
            tagList.append((l[1][:2], [l[1][3:-1]]))
    if not doneReading:
        raise BadWOSRecord("End of file reached before ER: {}".format(l[1]))
    else:
        retdict = collections.OrderedDict(tagList)
        if len(retdict) == len(tagList):
            return retdict
        else:
            dupSet = set()
            for tupl in tagList:
                if tupl[0] in retdict:
                    dupSet.add(tupl[0])
            raise BadWOSRecord("Duplicate tags (" + ', '.join(dupSet) + ") in record")
[ "This is function that is used to create [Records](../classes/Record.html#metaknowledge.Record) from files.\n\n **recordParser**() reads the file _paper_ until it reaches 'ER'. For each field tag it adds an entry to the returned dict with the tag as the key and a list of the entries as the value, the list has each line separately, so for the following two lines in a record:\n\n AF BREVIK, I\n ANICIN, B\n\n The entry in the returned dict would be `{'AF' : [\"BREVIK, I\", \"ANICIN, B\"]}`\n\n `Record` objects can be created with these dictionaries as the initializer.\n\n # Parameters\n\n _paper_ : `file stream`\n\n > An open file, with the current line at the beginning of the WOS record.\n\n # Returns\n\n `OrderedDict[str : List[str]]`\n\n > A dictionary mapping WOS tags to lists, the lists are of strings, each string is a line of the record associated with the tag.\n " ]
Please provide a description of the function:
def writeRecord(self, infile):
    if self.bad:
        raise BadWOSRecord("This record cannot be converted to a file as the input was malformed.\nThe original line number (if any) is: {} and the original file is: '{}'".format(self._sourceLine, self._sourceFile))
    else:
        for tag in self._fieldDict.keys():
            for i, value in enumerate(self._fieldDict[tag]):
                if i == 0:
                    infile.write(tag + ' ')
                else:
                    # WOS continuation lines are indented three spaces
                    infile.write('   ')
                infile.write(value + '\n')
        infile.write("ER\n")
[ "Writes to _infile_ the original contents of the Record. This is intended for use by [RecordCollections](./RecordCollection.html#metaknowledge.RecordCollection) to write to file. What is written to _infile_ is bit for bit identical to the original record file (if utf-8 is used). No newline is inserted above the write but the last character is a newline.\n\n # Parameters\n\n _infile_ : `file stream`\n\n > An open utf-8 encoded file\n " ]
Please provide a description of the function:
def getInstitutions(self, tags = None, seperator = ";", _getTag = False):
    if tags is None:
        tags = []
    elif isinstance(tags, str):
        tags = [tags]
    for k in self.keys():
        if 'institution' in k.lower() and k not in tags:
            tags.append(k)
    return super().getInvestigators(tags = tags, seperator = seperator, _getTag = _getTag)
[ "Returns a list with the names of the institution. The optional arguments are ignored\n\n # Returns\n\n `list [str]`\n\n > A list with 1 entry the name of the institution\n " ]
Please provide a description of the function:
def medlineRecordParser(record):
    tagDict = collections.OrderedDict()
    tag = 'PMID'
    mostRecentAuthor = None
    for lineNum, line in record:
        tmptag = line[:4].rstrip()
        contents = line[6:-1]
        if tmptag.isalpha() and line[4] == '-':
            tag = tmptag
            if tag == 'AU':
                mostRecentAuthor = contents
            if tag in authorBasedTags:
                contents = "{} : {}".format(mostRecentAuthor, contents)
            try:
                tagDict[tag].append(contents)
            except KeyError:
                tagDict[tag] = [contents]
        elif line[:6] == '      ':
            tagDict[tag][-1] += '\n' + line[6:-1]
        elif line == '\n':
            break
        else:
            raise BadPubmedRecord("Tag not formed correctly on line {}: '{}'".format(lineNum, line))
    return tagDict
[ "The parser [`MedlineRecord`](../classes/MedlineRecord.html#metaknowledge.medline.MedlineRecord) use. This takes an entry from [medlineParser()](#metaknowledge.medline.medlineHandlers.medlineParser) and parses it a part of the creation of a `MedlineRecord`.\n\n # Parameters\n\n _record_ : `enumerate object`\n\n > a file wrapped by `enumerate()`\n\n # Returns\n\n `collections.OrderedDict`\n\n > An ordered dictionary of the key-vaue pairs in the entry\n " ]
Please provide a description of the function:
def writeRecord(self, f):
    if self.bad:
        raise BadPubmedRecord("This record cannot be converted to a file as the input was malformed.\nThe original line number (if any) is: {} and the original file is: '{}'".format(self._sourceLine, self._sourceFile))
    else:
        authTags = {}
        for tag in authorBasedTags:
            for val in self._fieldDict.get(tag, []):
                split = val.split(' : ')
                try:
                    authTags[split[0]].append("{0}{1}- {2}\n".format(tag, ' ' * (4 - len(tag)), ' : '.join(split[1:]).replace('\n', '\n      ')))
                except KeyError:
                    authTags[split[0]] = ["{0}{1}- {2}\n".format(tag, ' ' * (4 - len(tag)), ' : '.join(split[1:]).replace('\n', '\n      '))]
        for tag, value in self._fieldDict.items():
            if tag in authorBasedTags:
                continue
            else:
                for v in value:
                    # MEDLINE continuation lines are indented six spaces
                    f.write("{0}{1}- {2}\n".format(tag, ' ' * (4 - len(tag)), v.replace('\n', '\n      ')))
                    if tag == 'AU':
                        for authVal in authTags.get(v, []):
                            f.write(authVal)
[ "This is nearly identical to the original the FAU tag is the only tag not writen in the same place, doing so would require changing the parser and lots of extra logic.\n " ]
Please provide a description of the function:
def quickVisual(G, showLabel = False):
    colours = "brcmykwg"
    f = plt.figure(1)
    ax = f.add_subplot(1, 1, 1)
    ndTypes = []
    ndColours = []
    layout = nx.spring_layout(G, k = 4 / math.sqrt(len(G.nodes())))
    for nd in G.nodes(data = True):
        if 'type' in nd[1]:
            if nd[1]['type'] not in ndTypes:
                ndTypes.append(nd[1]['type'])
            ndColours.append(colours[ndTypes.index(nd[1]['type']) % len(colours)])
        elif len(ndColours) > 1:
            raise RuntimeError("Some nodes do not have a type")
    if len(ndColours) < 1:
        nx.draw_networkx_nodes(G, pos = layout, node_color = colours[0], node_shape = '8', node_size = 100, ax = ax)
    else:
        nx.draw_networkx_nodes(G, pos = layout, node_color = ndColours, node_shape = '8', node_size = 100, ax = ax)
    nx.draw_networkx_edges(G, pos = layout, width = .7, ax = ax)
    if showLabel:
        nx.draw_networkx_labels(G, pos = layout, font_size = 8, ax = ax)
    plt.axis('off')
    f.set_facecolor('w')
[ "Just makes a simple _matplotlib_ figure and displays it, with each node coloured by its type. You can add labels with _showLabel_. This looks a bit nicer than the one provided my _networkx_'s defaults.\n\n # Parameters\n\n _showLabel_ : `optional [bool]`\n\n > Default `False`, if `True` labels will be added to the nodes giving their IDs.\n " ]
Please provide a description of the function:
def graphDensityContourPlot(G, iters = 50, layout = None, layoutScaleFactor = 1, overlay = False, nodeSize = 10, axisSamples = 100, blurringFactor = .1, contours = 15, graphType = 'coloured'):
    from mpl_toolkits.mplot3d import Axes3D
    if not isinstance(G, nx.classes.digraph.DiGraph) and not isinstance(G, nx.classes.graph.Graph):
        raise TypeError("{} is not a valid input.".format(type(G)))
    if layout is None:
        layout = nx.spring_layout(G, scale = axisSamples - 1, iterations = iters)
        grid = np.zeros([axisSamples, axisSamples], dtype = np.float32)
        for v in layout.values():
            x, y = tuple(int(x) for x in v.round(0))
            grid[y][x] += 1
    elif isinstance(layout, dict):
        layout = layout.copy()
        grid = np.zeros([axisSamples, axisSamples], dtype = np.float32)
        multFactor = (axisSamples - 1) / layoutScaleFactor
        for k in layout.keys():
            tmpPos = layout[k] * multFactor
            layout[k] = tmpPos
            x, y = tuple(int(x) for x in tmpPos.round(0))
            grid[y][x] += 1
    else:
        raise TypeError("{} is not a valid input.".format(type(layout)))
    fig = plt.figure()
    #axis = fig.add_subplot(111)
    axis = fig.gca(projection = '3d')
    if overlay:
        nx.draw_networkx(G, pos = layout, ax = axis, node_size = nodeSize, with_labels = False, edgelist = [])
    grid = ndi.gaussian_filter(grid, (blurringFactor * axisSamples, blurringFactor * axisSamples))
    X = Y = np.arange(0, axisSamples, 1)
    X, Y = np.meshgrid(X, Y)
    if graphType == "solid":
        CS = axis.plot_surface(X, Y, grid)
    else:
        CS = axis.contourf(X, Y, grid, contours)
    axis.set_xlabel('X')
    axis.set_ylabel('Y')
    axis.set_zlabel('Node Density')
[ "Creates a 3D plot giving the density of nodes on a 2D plane, as a surface in 3D.\n\n Most of the options are for tweaking the final appearance. _layout_ and _layoutScaleFactor_ allow a pre-layout graph to be provided. If a layout is not provided the [networkx.spring_layout()](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.layout.spring_layout.html) is used after _iters_ iterations. Then, once the graph has been laid out a grid of _axisSamples_ cells by _axisSamples_ cells is overlaid and the number of nodes in each cell is determined, a gaussian blur is then applied with a sigma of _blurringFactor_. This then forms a surface in 3 dimensions, which is then plotted.\n\n If you find the resultant image looks too banded raise the the _contours_ number to ~50.\n\n # Parameters\n\n _G_ : `networkx Graph`\n\n > The graph to be plotted\n\n _iters_ : `optional [int]`\n\n > Default `50`, the number of iterations for the spring layout if _layout_ is not provided.\n\n _layout_ : `optional [networkx layout dictionary]`\n\n > Default `None`, if provided will be used as a layout of the graph, the maximum distance from the origin along any axis must also given as _layoutScaleFactor_, which is by default `1`.\n\n _layoutScaleFactor_ : `optional [double]`\n\n > Default `1`, The maximum distance from the origin allowed along any axis given by _layout_, i.e. the layout must fit in a square centered at the origin with side lengths 2 * _layoutScaleFactor_\n\n _overlay_ : `optional [bool]`\n\n > Default `False`, if `True` the 2D graph will be plotted on the X-Y plane at Z = 0.\n\n _nodeSize_ : `optional [double]`\n\n > Default `10`, the size of the nodes dawn in the overlay\n\n _axisSamples_ : `optional [int]`\n\n > Default 100, the number of cells used along each axis for sampling. A larger number will mean a lower average density.\n\n _blurringFactor_ : `optional [double]`\n\n > Default `0.1`, the sigma value used for smoothing the surface density. The higher this number the smoother the surface.\n\n _contours_ : `optional [int]`\n\n > Default 15, the number of different heights drawn. If this number is low the resultant image will look very banded. It is recommended this be raised above `50` if you want your images to look good, **Warning** this will make them much slower to generate and interact with.\n\n _graphType_ : `optional [str]`\n\n > Default `'coloured'`, if `'coloured'` the image will have a destiny based colourization applied, the only other option is `'solid'` which removes the colourization.\n\n " ]
Please provide a description of the function:
def getMonth(s):
    monthOrSeason = s.split('-')[0].upper()
    if monthOrSeason in monthDict:
        return monthDict[monthOrSeason]
    else:
        # Fall back to the part after the hyphen, e.g. "1998-04" or "1998-APR";
        # the try/except makes the ValueError reachable (it was dead code before)
        try:
            monthOrSeason = s.split('-')[1].upper()
            if monthOrSeason.isdigit():
                return monthOrSeason
            else:
                return monthDict[monthOrSeason]
        except (IndexError, KeyError):
            raise ValueError("Month format not recognized: " + s)
[ "\n Known formats:\n Month (\"%b\")\n Month Day (\"%b %d\")\n Month-Month (\"%b-%b\") --- this gets coerced to the first %b, dropping the month range\n Season (\"%s\") --- this gets coerced to use the first month of the given season\n Month Day Year (\"%b %d %Y\")\n Month Year (\"%b %Y\")\n Year Month Day (\"%Y %m %d\")\n " ]
Please provide a description of the function:
def makeBiDirectional(d):
    dTmp = d.copy()
    for k in d:
        dTmp[d[k]] = k
    return dTmp
[ "\n Helper for generating tagNameConverter\n Makes dict that maps from key to value and back\n " ]
Please provide a description of the function:
def reverseDict(d):
    retD = {}
    for k in d:
        retD[d[k]] = k
    return retD
[ "\n Helper for generating fullToTag\n Makes dict of value to key\n " ]
Please provide a description of the function:
def proQuestRecordParser(enRecordFile, recNum):
    tagDict = collections.OrderedDict()
    currentEntry = 'Name'
    while True:
        lineNum, line = next(enRecordFile)
        if line == '_' * 60 + '\n':
            break
        elif line == '\n':
            pass
        elif currentEntry == 'Name' or currentEntry == 'url':  # '==' not 'is': string identity is an implementation detail
            tagDict[currentEntry] = [line.rstrip()]
            currentEntry = None
        elif ':' in line and not line.startswith('http://'):
            splitLine = line.split(': ')
            currentEntry = splitLine[0]
            tagDict[currentEntry] = [': '.join(splitLine[1:]).rstrip()]
            if currentEntry == 'Author':
                currentEntry = 'url'
        else:
            tagDict[currentEntry].append(line.rstrip())
    return tagDict
[ "The parser [ProQuestRecords](../classes/ProQuestRecord.html#metaknowledge.proquest.ProQuestRecord) use. This takes an entry from [proQuestParser()](#metaknowledge.proquest.proQuestHandlers.proQuestParser) and parses it a part of the creation of a `ProQuestRecord`.\n\n # Parameters\n\n _enRecordFile_ : `enumerate object`\n\n > a file wrapped by `enumerate()`\n\n _recNum_ : `int`\n\n > The number given to the entry in the first section of the ProQuest file\n\n # Returns\n\n `collections.OrderedDict`\n\n > An ordered dictionary of the key-vaue pairs in the entry\n " ]
Please provide a description of the function:
def addToNetwork(grph, nds, count, weighted, nodeType, nodeInfo, fullInfo, coreCitesDict, coreValues, detailedValues, addCR, recordToCite = True, headNd = None):
    if headNd is not None:
        hID = makeID(headNd, nodeType)
        if nodeType == 'full' or nodeType == 'original':
            hYear = getattr(headNd, "year")
        if hID not in grph:
            nodeName, nodeDat = makeNodeTuple(headNd, hID, nodeInfo, fullInfo, nodeType, count, coreCitesDict, coreValues, detailedValues, addCR)
            grph.add_node(nodeName, **nodeDat)
    else:
        hID = None
    idList = []
    yearList = []
    for n in nds:
        nID = makeID(n, nodeType)
        if nodeType == 'full' or nodeType == 'original':
            try:
                nYear = getattr(n, "year")
            except:
                nYear = None
            yearList.append(nYear)
        if nID not in grph:
            nodeName, nodeDat = makeNodeTuple(n, nID, nodeInfo, fullInfo, nodeType, count, coreCitesDict, coreValues, detailedValues, addCR)
            grph.add_node(nodeName, **nodeDat)
        elif count:
            grph.node[nID]['count'] += 1
        idList.append(nID)
    addedEdges = []
    if hID:
        for i in range(len(idList)):
            nID = idList[i]
            if nodeType == 'full' or nodeType == 'original':
                nYear = yearList[i]
                try:
                    yearDiff = abs(hYear - nYear)
                except:
                    yearDiff = None
                if weighted:
                    try:
                        if recordToCite:
                            grph[hID][nID]['weight'] += 1
                        else:
                            grph[nID][hID]['weight'] += 1
                    except KeyError:
                        if recordToCite:
                            grph.add_edge(hID, nID, weight = 1, yearDiff = yearDiff)
                        else:
                            grph.add_edge(nID, hID, weight = 1, yearDiff = yearDiff)
                elif nID not in grph[hID]:
                    # only 'full'/'original' edges carry the year difference
                    addedEdges.append((hID, nID, {'yearDiff': yearDiff}))
            elif weighted:
                try:
                    if recordToCite:
                        grph[hID][nID]['weight'] += 1
                    else:
                        grph[nID][hID]['weight'] += 1
                except KeyError:
                    if recordToCite:
                        grph.add_edge(hID, nID, weight = 1)
                    else:
                        grph.add_edge(nID, hID, weight = 1)
            elif nID not in grph[hID]:
                addedEdges.append((hID, nID))
    elif len(idList) > 1:
        for i, outerID in enumerate(idList):
            for innerID in idList[i + 1:]:
                if weighted:
                    try:
                        grph[outerID][innerID]['weight'] += 1
                    except KeyError:
                        grph.add_edge(outerID, innerID, weight = 1)
                elif innerID not in grph[outerID]:
                    addedEdges.append((outerID, innerID))
    grph.add_edges_from(addedEdges)
[ "Addeds the citations _nds_ to _grph_, according to the rules give by _nodeType_, _fullInfo_, etc.\n\n _headNd_ is the citation of the Record\n " ]
Please provide a description of the function:
def makeNodeTuple(citation, idVal, nodeInfo, fullInfo, nodeType, count, coreCitesDict, coreValues, detailedValues, addCR):
    d = {}
    if nodeInfo:
        if nodeType == 'full':
            if coreValues:
                if citation in coreCitesDict:
                    R = coreCitesDict[citation]
                    d['MK-ID'] = R.id
                    if not detailedValues:
                        infoVals = []
                        for tag in coreValues:
                            tagVal = R.get(tag)
                            if isinstance(tagVal, str):
                                infoVals.append(tagVal.replace(',', ''))
                            elif isinstance(tagVal, list):
                                infoVals.append(tagVal[0].replace(',', ''))
                            else:
                                pass
                        d['info'] = ', '.join(infoVals)
                    else:
                        for tag in coreValues:
                            v = R.get(tag, None)
                            if isinstance(v, list):
                                d[tag] = '|'.join(sorted(v))
                            else:
                                d[tag] = v
                    d['inCore'] = True
                    if addCR:
                        d['citations'] = '|'.join((str(c) for c in R.get('citations', [])))
                else:
                    d['MK-ID'] = 'None'
                    d['info'] = citation.allButDOI()
                    d['inCore'] = False
                    if addCR:
                        d['citations'] = ''
            else:
                d['info'] = citation.allButDOI()
        elif nodeType == 'journal':
            if citation.isJournal():
                d['info'] = str(citation.FullJournalName())
            else:
                d['info'] = "None"
        elif nodeType == 'original':
            d['info'] = str(citation)
        else:
            d['info'] = idVal
    if fullInfo:
        d['fullCite'] = str(citation)
    if count:
        d['count'] = 1
    return (idVal, d)
[ "Makes a tuple of idVal and a dict of the selected attributes" ]
Please provide a description of the function:
def expandRecs(G, RecCollect, nodeType, weighted):
    for Rec in RecCollect:
        fullCiteList = [makeID(c, nodeType) for c in Rec.createCitation(multiCite = True)]
        if len(fullCiteList) > 1:
            for i, citeID1 in enumerate(fullCiteList):
                if citeID1 in G:
                    for citeID2 in fullCiteList[i + 1:]:
                        if citeID2 not in G:
                            G.add_node(citeID2, **G.node[citeID1])
                            if weighted:
                                G.add_edge(citeID1, citeID2, weight = 1)
                            else:
                                G.add_edge(citeID1, citeID2)
                        elif weighted:
                            try:
                                G.edges[citeID1, citeID2]['weight'] += 1
                            except KeyError:
                                G.add_edge(citeID1, citeID2, weight = 1)
                        for e1, e2, data in G.edges(citeID1, data = True):
                            G.add_edge(citeID2, e2, **data)
[ "Expand all the citations from _RecCollect_" ]
Please provide a description of the function:
def dropNonJournals(self, ptVal = 'J', dropBad = True, invert = False):
    if dropBad:
        self.dropBadEntries()
    if invert:
        self._collection = {r for r in self._collection if r['pubType'] != ptVal.upper()}
    else:
        self._collection = {r for r in self._collection if r['pubType'] == ptVal.upper()}
[ "Drops the non journal type `Records` from the collection, this is done by checking _ptVal_ against the PT tag\n\n # Parameters\n\n _ptVal_ : `optional [str]`\n\n > Default `'J'`, The value of the PT tag to be kept, default is `'J'` the journal tag, other tags can be substituted.\n\n _dropBad_ : `optional [bool]`\n\n > Default `True`, if `True` bad `Records` will be dropped as well those that are not journal entries\n\n _invert_ : `optional [bool]`\n\n > Default `False`, Set `True` to drop journals (or the PT tag given by _ptVal_) instead of keeping them. **Note**, it still drops bad Records if _dropBad_ is `True`\n " ]
Please provide a description of the function:
def writeFile(self, fname = None):
    if len(self._collectedTypes) < 2:
        recEncoding = self.peek().encoding()
    else:
        recEncoding = 'utf-8'
    if fname:
        f = open(fname, mode = 'w', encoding = recEncoding)
    else:
        f = open(self.name[:200] + '.txt', mode = 'w', encoding = recEncoding)
    if self._collectedTypes == {'WOSRecord'}:
        f.write("\ufeffFN Thomson Reuters Web of Science\u2122\n")
        f.write("VR 1.0\n")
    elif self._collectedTypes == {'MedlineRecord'}:
        f.write('\n')
    elif self._collectedTypes == {'ScopusRecord'}:
        f.write("\ufeff{}\n".format(','.join(scopusHeader)))
    for R in self._collection:
        R.writeRecord(f)
        f.write('\n')
    if self._collectedTypes == {'WOSRecord'}:
        f.write('EF')
    f.close()
[ "Writes the `RecordCollection` to a file, the written file's format is identical to those download from WOS. The order of `Records` written is random.\n\n # Parameters\n\n _fname_ : `optional [str]`\n\n > Default `None`, if given the output file will written to _fanme_, if `None` the `RecordCollection`'s name's first 200 characters are used with the suffix .isi\n " ]
Please provide a description of the function:
def writeCSV(self, fname = None, splitByTag = None, onlyTheseTags = None, numAuthors = True, genderCounts = True, longNames = False, firstTags = None, csvDelimiter = ',', csvQuote = '"', listDelimiter = '|'):
    if firstTags is None:
        firstTags = ['id', 'title', 'authorsFull', 'citations', 'keywords', 'DOI']
    for i in range(len(firstTags)):
        if firstTags[i] in fullToTagDict:
            firstTags[i] = fullToTagDict[firstTags[i]]
    if onlyTheseTags:
        for i in range(len(onlyTheseTags)):
            if onlyTheseTags[i] in fullToTagDict:
                onlyTheseTags[i] = fullToTagDict[onlyTheseTags[i]]
        retrievedFields = [t for t in firstTags if t in onlyTheseTags] + [t for t in onlyTheseTags if t not in firstTags]
    else:
        retrievedFields = firstTags
        for R in self:
            tagsLst = [t for t in R.keys() if t not in retrievedFields]
            retrievedFields += tagsLst
    if longNames:
        try:
            retrievedFields = [tagToFullDict[t] for t in retrievedFields]
        except KeyError:
            raise KeyError("One of the tags could not be converted to a long name.")
    if fname:
        baseFileName = fname
    else:
        baseFileName = "{}.csv".format(self.name[:200])
    if numAuthors:
        csvWriterFields = retrievedFields + ["num-Authors"]
    else:
        csvWriterFields = retrievedFields
    if genderCounts:
        csvWriterFields += ['num-Male', 'num-Female', 'num-Unknown']
    if splitByTag is None:
        f = open(baseFileName, mode = 'w', encoding = 'utf-8', newline = '')
        csvWriter = csv.DictWriter(f, csvWriterFields, delimiter = csvDelimiter, quotechar = csvQuote, quoting = csv.QUOTE_ALL)
        csvWriter.writeheader()
    else:
        filesDict = {}
    for R in self:
        if splitByTag:
            try:
                splitVal = R[splitByTag]
            except KeyError:
                continue
            else:
                if not isinstance(splitVal, list):
                    splitVal = [str(splitVal)]
        recDict = {}
        for t in retrievedFields:
            value = R.get(t)
            if isinstance(value, str):
                recDict[t] = value
            elif hasattr(value, '__iter__'):
                recDict[t] = listDelimiter.join([str(v) for v in value])
            elif value is None:
                recDict[t] = ''
            else:
                recDict[t] = str(value)
        if numAuthors:
            recDict["num-Authors"] = len(R.get('authorsShort', []))
        if genderCounts:
            recDict['num-Male'], recDict['num-Female'], recDict['num-Unknown'] = R.authGenders(_countsTuple = True)
        if splitByTag:
            for sTag in splitVal:
                if sTag in filesDict:
                    filesDict[sTag][1].writerow(recDict)
                else:
                    fname = "{}-{}".format(sTag[:200], baseFileName)
                    f = open(fname, mode = 'w', encoding = 'utf-8', newline = '')
                    csvWriter = csv.DictWriter(f, csvWriterFields, delimiter = csvDelimiter, quotechar = csvQuote, quoting = csv.QUOTE_ALL)
                    csvWriter.writeheader()
                    csvWriter.writerow(recDict)
                    filesDict[sTag] = (f, csvWriter)
        else:
            csvWriter.writerow(recDict)
    if splitByTag:
        for f, c in filesDict.values():
            f.close()
    else:
        f.close()
[ "Writes all the `Records` from the collection into a csv file with each row a record and each column a tag.\n\n # Parameters\n\n _fname_ : `optional [str]`\n\n > Default `None`, the name of the file to write to, if `None` it uses the collections name suffixed by .csv.\n\n _splitByTag_ : `optional [str]`\n\n > Default `None`, if a tag is given the output will be divided into different files according to the value of the tag, with only the records associated with that tag. For example if `'authorsFull'` is given then each file will only have the lines for `Records` that author is named in.\n\n > The file names are the values of the tag followed by a dash then the normale name for the file as given by _fname_, e.g. for the year 2016 the file could be called `'2016-fname.csv'`.\n\n _onlyTheseTags_ : `optional [iterable]`\n\n > Default `None`, if an iterable (list, tuple, etc) only the tags in _onlyTheseTags_ will be used, if not given then all tags in the records are given.\n\n > If you want to use all known tags pass [metaknowledge.knownTagsList](./ExtendedRecord.html#metaknowledge.ExtendedRecord.tagProcessingFunc).\n\n _numAuthors_ : `optional [bool]`\n\n > Default `True`, if `True` adds the number of authors as the column `'numAuthors'`.\n\n _longNames_ : `optional [bool]`\n\n > Default `False`, if `True` will convert the tags to their longer names, otherwise the short 2 character ones will be used.\n\n _firstTags_ : `optional [iterable]`\n\n > Default `None`, if `None` the iterable `['UT', 'PT', 'TI', 'AF', 'CR']` is used. The tags given by the iterable are the first ones in the csv in the order given.\n\n > **Note** if tags are in _firstTags_ but not in _onlyTheseTags_, _onlyTheseTags_ will override _firstTags_\n\n _csvDelimiter_ : `optional [str]`\n\n > Default `','`, the delimiter used for the cells of the csv file.\n\n _csvQuote_ : `optional [str]`\n\n > Default `'\"'`, the quote character used for the csv.\n\n _listDelimiter_ : `optional [str]`\n\n > Default `'|'`, the delimiter used between values of the same cell if the tag for that record has multiple outputs.\n " ]
Please provide a description of the function:
def writeBib(self, fname = None, maxStringLength = 1000, wosMode = False, reducedOutput = False, niceIDs = True):
    if fname:
        f = open(fname, mode = 'w', encoding = 'utf-8')
    else:
        f = open(self.name[:200] + '.bib', mode = 'w', encoding = 'utf-8')
    f.write("%This file was generated by the metaknowledge Python package.\n%The contents have been automatically generated and are likely to not work with\n%LaTeX without some human intervention. This file is meant for other automatic\n%systems and not to be used directly for making citations\n")
    #I figure this is worth mentioning, as someone will get annoyed at none of the
    #special characters being escaped and how terrible some of the fields look to humans
    for R in self:
        try:
            f.write('\n\n')
            f.write(R.bibString(maxLength = maxStringLength, WOSMode = wosMode, restrictedOutput = reducedOutput, niceID = niceIDs))
        except BadWOSRecord:
            pass
        except AttributeError:
            raise RecordsNotCompatible("The Record '{}', with ID '{}' does not support writing to bibtext files.".format(R, R.id))
    f.close()
[ "Writes a bibTex entry to _fname_ for each `Record` in the collection.\n\n If the Record is of a journal article (PT J) the bibtext type is set to `'article'`, otherwise it is set to `'misc'`. The ID of the entry is the WOS number and all the Record's fields are given as entries with their long names.\n\n **Note** This is not meant to be used directly with LaTeX none of the special characters have been escaped and there are a large number of unnecessary fields provided. _niceID_ and _maxLength_ have been provided to make conversions easier only.\n\n **Note** Record entries that are lists have their values separated with the string `' and '`, as this is the way bibTex understands\n\n # Parameters\n\n _fname_ : `optional [str]`\n\n > Default `None`, The name of the file to be written. If not given one will be derived from the collection and the file will be written to .\n\n _maxStringLength_ : `optional [int]`\n\n > Default 1000, The max length for a continuous string. Most bibTex implementation only allow string to be up to 1000 characters ([source](https://www.cs.arizona.edu/~collberg/Teaching/07.231/BibTeX/bibtex.html)), this splits them up into substrings then uses the native string concatenation (the `'#'` character) to allow for longer strings\n\n _WOSMode_ : `optional [bool]`\n\n > Default `False`, if `True` the data produced will be unprocessed and use double curly braces. This is the style WOS produces bib files in and mostly macthes that.\n\n _restrictedOutput_ : `optional [bool]`\n\n > Default `False`, if `True` the tags output will be limited to: `'AF'`, `'BF'`, `'ED'`, `'TI'`, `'SO'`, `'LA'`, `'NR'`, `'TC'`, `'Z9'`, `'PU'`, `'J9'`, `'PY'`, `'PD'`, `'VL'`, `'IS'`, `'SU'`, `'PG'`, `'DI'`, `'D2'`, and `'UT'`\n\n _niceID_ : `optional [bool]`\n\n > Default `True`, if `True` the IDs used will be derived from the authors, publishing date and title, if `False` it will be the UT tag\n " ]
Please provide a description of the function:
def findProbableCopyright(self):
    retCopyrights = set()
    for R in self:
        begin, abS = findCopyright(R.get('abstract', ''))
        if abS != '':
            retCopyrights.add(abS)
    return list(retCopyrights)
[ "Finds the (likely) copyright string from all abstracts in the `RecordCollection`\n\n # Returns\n\n `list[str]`\n\n > A deduplicated list of all the copyright strings\n " ]
Please provide a description of the function:
def forBurst(self, tag, outputFile = None, dropList = None, lower = True, removeNumbers = True, removeNonWords = True, removeWhitespace = True, stemmer = None):
    whiteSpaceRegex = re.compile(r'\s+')
    if removeNumbers:
        if removeNonWords:
            otherString = r"[\W\d]"
        else:
            otherString = r"\d"
    elif removeNonWords:
        otherString = r"\W"
    else:
        otherString = ''

    def otherRepl(r):
        if r.group(0) == ' ':
            return ' '
        else:
            return ''

    otherDropsRegex = re.compile(otherString)

    def burstPreper(inString):
        if dropList is not None:
            inString = " {} ".format(inString)
            for dropS in (" {} ".format(s) for s in dropList):
                if dropS in inString:
                    inString = inString.replace(dropS, ' ')
            inString = inString[1:-1]
        if removeWhitespace:
            inString = re.sub(whiteSpaceRegex, lambda x: ' ', inString, count = 0)
        if lower:
            inString = inString.lower()
        inString = re.sub(otherDropsRegex, otherRepl, inString, count = 0)
        sTokens = inString.split(' ')
        if stemmer is not None:
            retTokens = []
            for token in sTokens:
                if stemmer is not None:
                    token = stemmer(token)
                retTokens.append(token)
        else:
            retTokens = sTokens
        return retTokens

    retDict = {'year' : [], 'word' : []}
    pcount = 0
    pmax = len(self)
    progArgs = (0, "Starting to work on DataFrame for burst analysis")
    if metaknowledge.VERBOSE_MODE:
        progKwargs = {'dummy' : False}
    else:
        progKwargs = {'dummy' : True}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        for R in self:
            pcount += 1
            PBar.updateVal(pcount / pmax, "Analyzing: {}".format(R))
            try:
                year = R['year']
            except KeyError:
                continue
            try:
                burstVal = R[tag]
            except KeyError:
                continue
            else:
                if isinstance(burstVal, list):
                    burstVal = ' '.join((str(i) for i in burstVal))
                else:
                    burstVal = str(burstVal)
            for sToken in burstPreper(burstVal):
                retDict['year'].append(year)
                retDict['word'].append(sToken)
        if outputFile is not None:
            PBar.updateVal(.99, "Writing to file: {}".format(outputFile))
            with open(outputFile, 'w', newline = '') as f:
                writer = csv.DictWriter(f, ['year', 'word'])
                for row in range(len(retDict['year'])):
                    writer.writerow({k : retDict[k][row] for k in retDict.keys()})
        PBar.finish("Done burst analysis DataFrame with {} rows".format(len(retDict['year'])))
    return retDict
[ "Creates a pandas friendly dictionary with 2 columns one `'year'` and the other `'word'`. Each row is a word that occurred in the field given by _tag_ in a `Record` and the year of the record. Unfortunately getting the month or day with any type of accuracy has proved to be impossible so year is the only option.\n\n # Parameters\n\n _tag_ : `str`\n\n > The tag giving the field for the words to be extracted from.\n\n _outputFile_ : `optional str`\n\n > Default `None`, if a path is given a csv file will be created from the returned dictionary and written to that file\n\n _dropList_ : `optional list[str]`\n\n > Default `None`, if a list of strings is given each field will be checked for substrings, before any other processing, in the field, surrounded by spaces, matching those in _dropList_. The strings will only be dropped if they are surrounded on both sides with spaces (`' '`) so if `dropList = ['a']` then `'a cat'` will become `'cat'`.\n\n _lower_ : `optional bool`\n\n > default `True`, if `True` the output will made lower case\n\n _removeNumbers_ : `optional bool`\n\n > default `True`, if `True` all numbers will be removed\n\n _removeNonWords_ : `optional bool`\n\n > default `True`, if `True` all non-number non-number characters will be removed\n\n _removeWhitespace_ : `optional bool`\n\n > default `True`, if `True` all whitespace will be converted to a single space (`' '`)\n\n _stemmer_ : `optional func`\n\n > default `None`, if a function is provided it will be run on each individual word in the field and the output will replace it. For example to use the `PorterStemmer` in the _nltk_ package you would give `nltk.PorterStemmer().stem`\n " ]
Please provide a description of the function:
def forNLP(self, outputFile = None, extraColumns = None, dropList = None, lower = True, removeNumbers = True, removeNonWords = True, removeWhitespace = True, removeCopyright = False, stemmer = None):
    whiteSpaceRegex = re.compile(r'\s+')
    if removeNumbers:
        if removeNonWords:
            otherString = r"[\W\d]"
        else:
            otherString = r"\d"
    elif removeNonWords:
        otherString = r"\W"
    else:
        otherString = ''

    def otherRepl(r):
        if r.group(0) == ' ':
            return ' '
        else:
            return ''

    otherDropsRegex = re.compile(otherString)

    def abPrep(abst):
        if dropList is not None:
            #in case a drop string is on the edge
            abst = " {} ".format(abst.replace('\n', ' '))
            for dropS in (" {} ".format(s) for s in dropList):
                if dropS in abst:
                    abst = abst.replace(dropS, ' ')
            abst = abst[1:-1]
        if removeWhitespace:
            abst = re.sub(whiteSpaceRegex, lambda x: ' ', abst, count = 0)
        if removeCopyright:
            abst, copyrightString = findCopyright(abst)
        else:
            copyrightString = ''
        if lower:
            abst = abst.lower()
        abst = re.sub(otherDropsRegex, otherRepl, abst, count = 0)
        if stemmer is not None:
            sTokens = abst.split(' ')
            retTokens = []
            for token in sTokens:
                if stemmer is not None:
                    token = stemmer(token)
                retTokens.append(token)
            abst = ' '.join(retTokens)
        return abst, copyrightString

    pcount = 0
    pmax = len(self)
    progArgs = (0, "Starting to work on DataFrame for NLP")
    if metaknowledge.VERBOSE_MODE:
        progKwargs = {'dummy' : False}
    else:
        progKwargs = {'dummy' : True}
    retDict = {'id' : [], 'year' : [], 'title' : [], 'keywords' : [], 'abstract' : []}
    if removeCopyright:
        retDict['copyright'] = []
    if extraColumns is None:
        extraColumns = []
    else:
        for builtinColumn in ['id', 'year', 'title', 'keywords', 'abstract']:
            if builtinColumn in extraColumns:
                extraColumns.remove(builtinColumn)
    for column in extraColumns:
        retDict[column] = []
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        for R in self:
            pcount += 1
            PBar.updateVal(pcount / pmax, "Analyzing: {}".format(R))
            abstract, copyrightString = abPrep(R.get('AB', ''))
            retDict['id'].append(R.id)
            retDict['year'].append(R.get('year', ''))
            retDict['title'].append(R.get('title', ''))
            retDict['keywords'].append('|'.join(R.get('keywords', [])))
            retDict['abstract'].append(abstract)
            if removeCopyright:
                retDict['copyright'].append(copyrightString)
            for extraTag in extraColumns:
                e = R.get(extraTag)
                if isinstance(e, list):
                    e = '|'.join((str(s) for s in e))
                elif e is None:
                    e = ''
                retDict[extraTag].append(e)
        if outputFile is not None:
            PBar.updateVal(.99, "Writing to file: {}".format(outputFile))
            with open(outputFile, 'w', newline = '') as f:
                fieldNames = list(retDict.keys())
                fieldNames.remove('id')
                fieldNames.remove('title')
                fieldNames.remove('year')
                fieldNames.remove('keywords')
                fieldNames = ['id', 'year', 'title', 'keywords'] + fieldNames
                writer = csv.DictWriter(f, fieldNames)
                writer.writeheader()
                for row in range(len(retDict['id'])):
                    writer.writerow({k : retDict[k][row] for k in retDict.keys()})
        PBar.finish("Done NLP DataFrame with {} rows".format(len(retDict['id'])))
    return retDict
[ "Creates a pandas friendly dictionary with each row a `Record` in the `RecordCollection` and the columns fields natural language processing uses (id, title, publication year, keywords and the abstract). The abstract is by default is processed to remove non-word, non-space characters and the case is lowered.\n\n # Parameters\n\n _outputFile_ : `optional str`\n\n > default `None`, if a file path is given a csv of the returned data will be written\n\n _extraColumns_ : `optional list[str]`\n\n > default `None`, if a list of tags is given each of the tag's values for a `Record` will be added to the output(s)\n\n _dropList_ : `optional list[str]`\n\n > default `None`, if a list of strings is provided they will be dropped from the output's abstracts. The matching is case sensitive and done before any other processing. The strings will only be dropped if they are surrounded on both sides with spaces (`' '`) so if `dropList = ['a']` then `'a cat'` will become `'cat'`.\n\n _lower_ : `optional bool`\n\n > default `True`, if `True` the abstract will made to lower case\n\n _removeNumbers_ : `optional bool`\n\n > default `True`, if `True` all numbers will be removed\n\n _removeNonWords_ : `optional bool`\n\n > default `True`, if `True` all non-number non-number characters will be removed\n\n _removeWhitespace_ : `optional bool`\n\n > default `True`, if `True` all whitespace will be converted to a single space (`' '`)\n\n _removeCopyright_ : `optional bool`\n\n > default `False`, if `True` the copyright statement at the end of the abstract will be removed and added to a new column. Note this is heuristic based and will not work for all papers.\n\n _stemmer_ : `optional func`\n\n > default `None`, if a function is provided it will be run on each individual word in the abstract and the output will replace it. For example to use the `PorterStemmer` in the _nltk_ package you would give `nltk.PorterStemmer().stem`\n " ]
Please provide a description of the function:
def makeDict(self, onlyTheseTags = None, longNames = False, raw = False, numAuthors = True, genderCounts = True):
    if onlyTheseTags:
        for i in range(len(onlyTheseTags)):
            if onlyTheseTags[i] in fullToTagDict:
                onlyTheseTags[i] = fullToTagDict[onlyTheseTags[i]]
        retrievedFields = onlyTheseTags
    else:
        retrievedFields = []
        for R in self:
            tagsLst = [t for t in R.keys() if t not in retrievedFields]
            retrievedFields += tagsLst
    if longNames:
        try:
            retrievedFields = [tagToFullDict[t] for t in retrievedFields]
        except KeyError:
            raise KeyError("One of the tags could not be converted to a long name.")
    retDict = {k : [] for k in retrievedFields}
    if numAuthors:
        retDict["num-Authors"] = []
    if genderCounts:
        retDict.update({'num-Male' : [], 'num-Female' : [], 'num-Unknown' : []})
    for R in self:
        if numAuthors:
            retDict["num-Authors"].append(len(R.get('authorsShort', [])))
        if genderCounts:
            m, f, u = R.authGenders(_countsTuple = True)
            retDict['num-Male'].append(m)
            retDict['num-Female'].append(f)
            retDict['num-Unknown'].append(u)
        for k, v in R.subDict(retrievedFields, raw = raw).items():
            retDict[k].append(v)
    return retDict
[ "Returns a dict with each key a tag and the values being lists of the values for each of the Records in the collection, `None` is given when there is no value and they are in the same order across each tag.\n\n When used with pandas: `pandas.DataFrame(RC.makeDict())` returns a data frame with each column a tag and each row a Record.\n\n # Parameters\n\n _onlyTheseTags_ : `optional [iterable]`\n\n > Default `None`, if an iterable (list, tuple, etc) only the tags in _onlyTheseTags_ will be used, if not given then all tags in the records are given.\n\n > If you want to use all known tags pass [metaknowledge.knownTagsList](./ExtendedRecord.html#metaknowledge.ExtendedRecord.tagProcessingFunc).\n\n _longNames_ : `optional [bool]`\n\n > Default `False`, if `True` will convert the tags to their longer names, otherwise the short 2 character ones will be used.\n\n _cleanedVal_ : `optional [bool]`\n\n > Default `True`, if `True` the processed values for each `Record`'s field will be provided, otherwise the raw values are given.\n\n _numAuthors_ : `optional [bool]`\n\n > Default `True`, if `True` adds the number of authors as the column `'numAuthors'`.\n " ]
Please provide a description of the function:
def rpys(self, minYear = None, maxYear = None, dropYears = None, rankEmptyYears = False):
    def deviation(targetYear, targetValue, targetDict):
        yearCounts = [targetValue]
        for deltaY in [-2, -1, 1, 2]:
            try:
                yearCounts.append(targetDict[targetYear + deltaY])
            except KeyError:
                yearCounts.append(0)
        medianCount = list(sorted(yearCounts))[2]
        absDiff = targetValue - medianCount
        return absDiff

    if dropYears is None:
        dropYears = set()
    yearCounts = {}
    retDict = {'year' : [], 'count' : [], 'abs-deviation' : [], 'rank' : []}
    for R in self:
        try:
            cites = R['citations']
        except KeyError:
            continue
        recYear = R.get('year', float('inf'))
        for cite in cites:
            try:
                #year can be None
                cYear = int(cite.year)
            except (AttributeError, TypeError):
                continue
            else:
                #need the extra years for the normalization
                if (maxYear is not None and cYear > (maxYear + 2)) or (minYear is not None and cYear < (minYear - 2)):
                    continue
                #years from before the paper are an error
                elif recYear < (cYear + 2):
                    continue
            if cYear in yearCounts:
                yearCounts[cYear] += 1
            else:
                yearCounts[cYear] = 1
    if minYear is None:
        smallest = min(yearCounts.keys())
        if smallest > 1000:
            minYear = smallest
    if maxYear is None:
        biggest = max(yearCounts.keys())
        if biggest < 2100:
            maxYear = biggest
    targetYears = set((i for i in range(minYear, maxYear + 1) if i not in dropYears))
    ranks = {}
    yearDeviances = {}
    for y in targetYears:
        try:
            c = yearCounts[y]
        except KeyError:
            c = 0
        yearDeviances[y] = deviation(y, c, yearCounts)
    for rank, year in enumerate(sorted(yearDeviances.items(), key = lambda x: x[1], reverse = False), start = 1):
        ranks[year[0]] = rank
    for y in targetYears:
        try:
            c = yearCounts[y]
        except KeyError:
            c = 0
        if c == 0 and not rankEmptyYears:
            retDict['rank'].append(0)
        else:
            retDict['rank'].append(ranks[y])
        retDict['abs-deviation'].append(yearDeviances[y])
        retDict['year'].append(y)
        retDict['count'].append(c)
    return retDict
[ "This implements _Referenced Publication Years Spectroscopy_ a techinique for finding import years in citation data. The authors of the original papers have a website with more information, found [here](http://www.leydesdorff.net/software/rpys/).\n\n This function computes the spectra of the `RecordCollection` and returns a dictionary mapping strings to lists of `ints`. Each list is ordered and the values of each with the same index form a row and each list a column. The strings are the names of the columns. This is intended to be read directly by pandas `DataFrames`.\n\n The columns returned are:\n\n 1. `'year'`, the years of the counted citations, missing years are inserted with a count of 0, unless they are outside the bounds of the highest year or the lowest year and the default value is used. e.g. if the highest year is 2016, 2017 will not be inserted unless _maxYear_ has been set to 2017 or higher\n 2. `'count'`, the number of times the year was cited\n 3. `'abs-deviation'`, deviation from the 5-year median. Calculated by taking the absolute deviation of the count from the median of it and the next 2 years and the preceding 2 years\n 4. `'rank'`, the rank of the year, the highest ranked year being the one with the highest deviation, the second highest being the second highest deviation and so on. All years with 0 count are given the rank 0 by default\n\n # Parameters\n\n _minYear_ : `optional int`\n\n > Default `1000`, The lowest year to be returned, note years outside this bound will be used to calculate the deviation from the 5-year median\n\n _maxYear_ : `optional int`\n\n > Default `2100`, The highest year to be returned, note years outside this bound will be used to calculate the deviation from the 5-year median\n\n _dropYears_ : `optional int or list[int]`\n\n > Default `None`, year or collection of years that will be removed from the returned value, note the dropped years will still be used to calculate the deviation from the 5-year\n\n _rankEmptyYears_ : `optional [bool]`\n\n > Default `False`, if `True` years with 0 count will be ranked according to their deviance, if many 0 count years exist their ordering is not guaranteed to be stable\n\n # Returns\n\n `dict[str:list]`\n\n > The table of values from the _Referenced Publication Years Spectroscopy_\n " ]
Please provide a description of the function:
def genderStats(self, asFractions = False):
    maleCount = 0
    femaleCount = 0
    unknownCount = 0
    for R in self:
        m, f, u = R.authGenders(_countsTuple = True)
        maleCount += m
        femaleCount += f
        unknownCount += u
    if asFractions:
        tot = maleCount + femaleCount + unknownCount
        return {'Male' : maleCount / tot, 'Female' : femaleCount / tot, 'Unknown' : unknownCount / tot}
    return {'Male' : maleCount, 'Female' : femaleCount, 'Unknown' : unknownCount}
[ "Creates a dict (`{'Male' : maleCount, 'Female' : femaleCount, 'Unknown' : unknownCount}`) with the numbers of male, female and unknown names in the collection.\n\n # Parameters\n\n _asFractions_ : `optional bool`\n\n > Default `False`, if `True` the counts will be divided by the total number of names, giving the fraction of names in each category instead of the raw counts.\n\n # Returns\n\n `dict[str:int]`\n\n > A dict with three keys `'Male'`, `'Female'` and `'Unknown'` mapping to their respective counts\n " ]
Please provide a description of the function:
def getCitations(self, field = None, values = None, pandasFriendly = True, counts = True):
    retCites = []
    if values is not None:
        if isinstance(values, (str, int, float)) or not isinstance(values, collections.abc.Container):
            values = [values]
    for R in self:
        retCites += R.getCitations(field = field, values = values, pandasFriendly = False)
    if pandasFriendly:
        return _pandasPrep(retCites, counts)
    else:
        return list(set(retCites))
[ "Creates a pandas ready dict with each row a different citation the contained Records and columns containing the original string, year, journal, author's name and the number of times it occured.\n\n There are also options to filter the output citations with _field_ and _values_\n\n # Parameters\n\n _field_ : `optional str`\n\n > Default `None`, if given all citations missing the named field will be dropped.\n\n _values_ : `optional str or list[str]`\n\n > Default `None`, if _field_ is also given only those citations with one of the strings given in _values_ will be included.\n\n > e.g. to get only citations from 1990 or 1991: `field = year, values = [1991, 1990]`\n\n _pandasFriendly_ : `optional bool`\n\n > Default `True`, if `False` a list of the citations will be returned instead of the more complicated pandas dict\n\n _counts_ : `optional bool`\n\n > Default `True`, if `False` the counts columns will be removed\n\n # Returns\n\n `dict`\n\n > A pandas ready dict with all the Citations\n " ]
Please provide a description of the function:
def networkCoAuthor(self, detailedInfo = False, weighted = True, dropNonJournals = False, count = True, useShortNames = False, citeProfile = False):
    grph = nx.Graph()
    pcount = 0
    progArgs = (0, "Starting to make a co-authorship network")
    if metaknowledge.VERBOSE_MODE:
        progKwargs = {'dummy' : False}
    else:
        progKwargs = {'dummy' : True}
    if bool(detailedInfo):
        try:
            infoVals = []
            for tag in detailedInfo:
                infoVals.append(normalizeToTag(tag))
        except TypeError:
            infoVals = ['year', 'title', 'journal', 'volume', 'beginningPage']

        def attributeMaker(Rec):
            attribsDict = {}
            for val in infoVals:
                recVal = Rec.get(val)
                if isinstance(recVal, list):
                    attribsDict[val] = ', '.join((str(v).replace(',', '') for v in recVal))
                else:
                    attribsDict[val] = str(recVal).replace(',', '')
            if count:
                attribsDict['count'] = 1
            if citeProfile:
                attribsDict['citeProfile'] = {}
            return attribsDict
    else:
        if count:
            if citeProfile:
                attributeMaker = lambda x: {'count' : 1, 'citeProfile' : {}}
            else:
                attributeMaker = lambda x: {'count' : 1}
        else:
            if citeProfile:
                attributeMaker = lambda x: {'citeProfile' : {}}
            else:
                attributeMaker = lambda x: {}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        for R in self:
            if PBar:
                pcount += 1
                PBar.updateVal(pcount / len(self), "Analyzing: " + str(R))
            if dropNonJournals and not R.createCitation().isJournal():
                continue
            if useShortNames:
                authsList = R.get('authorsShort', [])
            else:
                authsList = R.get('authorsFull', [])
            if authsList:
                authsList = list(authsList)
                detailedInfo = attributeMaker(R)
                if citeProfile:
                    citesLst = R.get('citations', [])
                for i, auth1 in enumerate(authsList):
                    if auth1 not in grph:
                        grph.add_node(auth1, **detailedInfo.copy())
                    elif count:
                        grph.node[auth1]['count'] += 1
                    if citeProfile:
                        for c in citesLst:
                            try:
                                grph.node[auth1]['citeProfile'][c] += 1
                            except KeyError:
                                grph.node[auth1]['citeProfile'][c] = 1
                    for auth2 in authsList[i + 1:]:
                        if auth2 not in grph:
                            grph.add_node(auth2, **detailedInfo.copy())
                        elif count:
                            grph.node[auth2]['count'] += 1
                        if citeProfile:
                            for c in citesLst:
                                try:
                                    grph.node[auth2]['citeProfile'][c] += 1
                                except KeyError:
                                    grph.node[auth2]['citeProfile'][c] = 1
                        if grph.has_edge(auth1, auth2) and weighted:
                            grph.edges[auth1, auth2]['weight'] += 1
                        elif weighted:
                            grph.add_edge(auth1, auth2, weight = 1)
                        else:
                            grph.add_edge(auth1, auth2)
        if citeProfile:
            if PBar:
                PBar.updateVal(.99, "Extracting citation profiles")
            previous = {}
            for n, dat in grph.nodes(data = True):
                previous[n] = dat
                #zip(*l) undoes zip(l1, l2)
                try:
                    cites, counts = zip(*dat['citeProfile'].items())
                except ValueError:
                    cites, counts = [], []
                dat['citeProfileCites'] = '|'.join((str(c) for c in cites))
                dat['citeProfileCounts'] = '|'.join((str(c) for c in counts))
                del dat['citeProfile']
        if PBar:
            PBar.finish("Done making a co-authorship network from {}".format(self))
    return grph
[ "Creates a coauthorship network for the RecordCollection.\n\n # Parameters\n\n _detailedInfo_ : `optional [bool or iterable[WOS tag Strings]]`\n\n > Default `False`, if `True` all nodes will be given info strings composed of information from the Record objects themselves. This is Equivalent to passing the list: `['PY', 'TI', 'SO', 'VL', 'BP']`.\n\n > If _detailedInfo_ is an iterable (that evaluates to `True`) of WOS Tags (or long names) The values of those tags will be used to make the info attributes.\n\n > For each of the selected tags an attribute will be added to the node using the values of those tags on the first `Record` encountered. **Warning** iterating over `RecordCollection` objects is not deterministic the first `Record` will not always be same between runs. The node will be given attributes with the names of the WOS tags for each of the selected tags. The attributes will contain strings of containing the values (with commas removed), if multiple values are encountered they will be comma separated.\n\n > Note: _detailedInfo_ is not identical to the _detailedCore_ argument of [Recordcollection.networkCoCitation()](#metaknowledge.RecordCollection.networkCoCitation) or [Recordcollection.networkCitation()](#metaknowledge.RecordCollection.networkCitation)\n\n _weighted_ : `optional [bool]`\n\n > Default `True`, whether the edges are weighted. If `True` the edges are weighted by the number of co-authorships.\n\n _dropNonJournals_ : `optional [bool]`\n\n > Default `False`, whether to drop authors from non-journals\n\n _count_ : `optional [bool]`\n\n > Default `True`, causes the number of occurrences of a node to be counted\n\n # Returns\n\n `Networkx Graph`\n\n > A networkx graph with author names as nodes and collaborations as edges.\n " ]
Please provide a description of the function:
def networkCoCitation(self, dropAnon = True, nodeType = "full", nodeInfo = True, fullInfo = False, weighted = True, dropNonJournals = False, count = True, keyWords = None, detailedCore = True, detailedCoreAttributes = False, coreOnly = False, expandedCore = False, addCR = False):
    allowedTypes = ["full", "original", "author", "journal", "year"]
    if nodeType not in allowedTypes:
        raise RCValueError("{} is not an allowed nodeType.".format(nodeType))
    coreValues = []
    if bool(detailedCore):
        try:
            for tag in detailedCore:
                coreValues.append(normalizeToTag(tag))
        except TypeError:
            coreValues = ['id', 'authorsFull', 'year', 'title', 'journal', 'volume', 'beginningPage']
    tmpgrph = nx.Graph()
    pcount = 0
    progArgs = (0, "Starting to make a co-citation network")
    if metaknowledge.VERBOSE_MODE:
        progKwargs = {'dummy' : False}
    else:
        progKwargs = {'dummy' : True}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        if coreOnly or coreValues or expandedCore:
            coreCitesDict = {R.createCitation() : R for R in self}
            if coreOnly:
                coreCites = coreCitesDict.keys()
            else:
                coreCites = None
        else:
            coreCitesDict = None
            coreCites = None
        for R in self:
            if PBar:
                pcount += 1
                PBar.updateVal(pcount / len(self), "Analyzing: {}".format(R))
            Cites = R.get('citations')
            if Cites:
                filteredCites = filterCites(Cites, nodeType, dropAnon, dropNonJournals, keyWords, coreCites)
                addToNetwork(tmpgrph, filteredCites, count, weighted, nodeType, nodeInfo, fullInfo, coreCitesDict, coreValues, detailedCoreAttributes, addCR, headNd = None)
        if expandedCore:
            if PBar:
                PBar.updateVal(.98, "Expanding core Records")
            expandRecs(tmpgrph, self, nodeType, weighted)
        if PBar:
            PBar.finish("Done making a co-citation network from {}".format(self))
    return tmpgrph
[ "Creates a co-citation network for the RecordCollection.\n\n # Parameters\n\n _nodeType_ : `optional [str]`\n\n > One of `\"full\"`, `\"original\"`, `\"author\"`, `\"journal\"` or `\"year\"`. Specifies the value of the nodes in the graph. The default `\"full\"` causes the citations to be compared holistically using the [metaknowledge.Citation](./Citation.html#metaknowledge.citation.Citation) builtin comparison operators. `\"original\"` uses the raw original strings of the citations. While `\"author\"`, `\"journal\"` and `\"year\"` each use the author, journal and year respectively.\n\n _dropAnon_ : `optional [bool]`\n\n > default `True`, if `True` citations labeled anonymous are removed from the network\n\n _nodeInfo_ : `optional [bool]`\n\n > default `True`, if `True` an extra piece of information is stored with each node. The extra inforamtion is detemined by _nodeType_.\n\n _fullInfo_ : `optional [bool]`\n\n > default `False`, if `True` the original citation string is added to the node as an extra value, the attribute is labeled as fullCite\n\n _weighted_ : `optional [bool]`\n\n > default `True`, wether the edges are weighted. If `True` the edges are weighted by the number of citations.\n\n _dropNonJournals_ : `optional [bool]`\n\n > default `False`, wether to drop citations of non-journals\n\n _count_ : `optional [bool]`\n\n > default `True`, causes the number of occurrences of a node to be counted\n\n _keyWords_ : `optional [str] or [list[str]]`\n\n > A string or list of strings that the citations are checked against, if they contain any of the strings they are removed from the network\n\n _detailedCore_ : `optional [bool or iterable[WOS tag Strings]]`\n\n > default `True`, if `True` all Citations from the core (those of records in the RecordCollection) and the _nodeType_ is `'full'` all nodes from the core will be given info strings composed of information from the Record objects themselves. This is Equivalent to passing the list: `['AF', 'PY', 'TI', 'SO', 'VL', 'BP']`.\n\n > If _detailedCore_ is an iterable (That evaluates to `True`) of WOS Tags (or long names) The values of those tags will be used to make the info attribute. All\n\n > The resultant string is the values of each tag, with commas removed, seperated by `', '`, just like the info given by non-core Citations. Note that for tags like `'AF'` that return lists only the first entry in the list will be used. Also a second attribute is created for all nodes called inCore wich is a boolean describing if the node is in the core or not.\n\n > Note: _detailedCore_ is not identical to the _detailedInfo_ argument of [Recordcollection.networkCoAuthor()](#metaknowledge.RecordCollection.networkCoAuthor)\n\n _coreOnly_ : `optional [bool]`\n\n > default `False`, if `True` only Citations from the RecordCollection will be included in the network\n\n _expandedCore_ : `optional [bool]`\n\n > default `False`, if `True` all citations in the ouput graph that are records in the collection will be duplicated for each author. If the nodes are `\"full\"`, `\"original\"` or `\"author\"` this will result in new noded being created for the other options the results are **not** defined or tested. Edges will be created between each of the nodes for each record expanded, attributes will be copied from exiting nodes.\n\n # Returns\n\n `Networkx Graph`\n\n > A networkx graph with hashes as ID and co-citation as edges\n " ]
Please provide a description of the function:
def networkBibCoupling(self, weighted = True, fullInfo = False, addCR = False):
    progArgs = (0, "Make a citation network for coupling")
    if metaknowledge.VERBOSE_MODE:
        progKwargs = {'dummy' : False}
    else:
        progKwargs = {'dummy' : True}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        citeGrph = self.networkCitation(weighted = False, directed = True, detailedCore = True, fullInfo = fullInfo, count = False, nodeInfo = True, addCR = addCR, _quiet = True)
        pcount = 0
        pmax = len(citeGrph)
        PBar.updateVal(.2, "Starting to classify nodes")
        workingGrph = nx.Graph()
        couplingSet = set()
        for n, d in citeGrph.nodes(data = True):
            pcount += 1
            PBar.updateVal(.2 + .4 * (pcount / pmax), "Classifying: {}".format(n))
            if d['inCore']:
                workingGrph.add_node(n, **d)
            if citeGrph.in_degree(n) > 0:
                couplingSet.add(n)
        pcount = 0
        pmax = len(couplingSet)
        for n in couplingSet:
            pcount += 1  # advance the progress bar
            PBar.updateVal(.6 + .4 * (pcount / pmax), "Coupling: {}".format(n))
            citesLst = list(citeGrph.in_edges(n))
            for i, edgeOuter in enumerate(citesLst):
                outerNode = edgeOuter[0]
                for edgeInner in citesLst[i + 1:]:
                    innerNode = edgeInner[0]
                    if weighted and workingGrph.has_edge(outerNode, innerNode):
                        workingGrph.edges[outerNode, innerNode]['weight'] += 1
                    elif weighted:
                        workingGrph.add_edge(outerNode, innerNode, weight = 1)
                    else:
                        workingGrph.add_edge(outerNode, innerNode)
        PBar.finish("Done making a bib-coupling network from {}".format(self))
    return workingGrph
[ "Creates a bibliographic coupling network based on citations for the RecordCollection.\n\n # Parameters\n\n _weighted_ : `optional bool`\n\n > Default `True`, if `True` the weight of the edges will be added to the network\n\n _fullInfo_ : `optional bool`\n\n > Default `False`, if `True` the full citation string will be added to each of the nodes of the network.\n\n # Returns\n\n `Networkx Graph`\n\n > A graph of the bibliographic coupling\n " ]
Please provide a description of the function:def yearSplit(self, startYear, endYear, dropMissingYears = True): recordsInRange = set() for R in self: try: if R.get('year') >= startYear and R.get('year') <= endYear: recordsInRange.add(R) except TypeError: if dropMissingYears: pass else: raise RCret = RecordCollection(recordsInRange, name = "{}({}-{})".format(self.name, startYear, endYear), quietStart = True) RCret._collectedTypes = self._collectedTypes.copy() return RCret
[ "Creates a RecordCollection of Records from the years between _startYear_ and _endYear_ inclusive.\n\n # Parameters\n\n _startYear_ : `int`\n\n > The smallest year to be included in the returned RecordCollection\n\n _endYear_ : `int`\n\n > The largest year to be included in the returned RecordCollection\n\n _dropMissingYears_ : `optional [bool]`\n\n > Default `True`, if `True` Records with missing years will be dropped. If `False` a `TypeError` exception will be raised\n\n # Returns\n\n `RecordCollection`\n\n > A RecordCollection of Records from _startYear_ to _endYear_\n " ]
Please provide a description of the function:def localCiteStats(self, pandasFriendly = False, keyType = "citation"): count = 0 recCount = len(self) progArgs = (0, "Starting to get the local stats on {}s.".format(keyType)) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: keyTypesLst = ["citation", "journal", "year", "author"] citesDict = {} if keyType not in keyTypesLst: raise TypeError("{} is not a valid key type, only '{}' or '{}' are.".format(keyType, "', '".join(keyTypesLst[:-1]), keyTypesLst[-1])) for R in self: rCites = R.get('citations') if PBar: count += 1 PBar.updateVal(count / recCount, "Analysing: {}".format(R.UT)) if rCites: for c in rCites: if keyType == keyTypesLst[0]: cVal = c else: cVal = getattr(c, keyType) if cVal is None: continue if cVal in citesDict: citesDict[cVal] += 1 else: citesDict[cVal] = 1 if PBar: PBar.finish("Done, {} {} fields analysed".format(len(citesDict), keyType)) if pandasFriendly: citeLst = [] countLst = [] for cite, occ in citesDict.items(): citeLst.append(cite) countLst.append(occ) return {"Citations" : citeLst, "Counts" : countLst} else: return citesDict
[ "Returns a dict with all the citations in the CR field as keys and the number of times they occur as the values\n\n # Parameters\n\n _pandasFriendly_ : `optional [bool]`\n\n > default `False`, makes the output be a dict with two keys one `'Citations'` is the citations the other is their occurrence counts as `'Counts'`.\n\n _keyType_ : `optional [str]`\n\n > default `'citation'`, the type of key to use for the dictionary, the valid strings are `'citation'`, `'journal'`, `'year'` or `'author'`. IF changed from `'citation'` all citations matching the requested option will be contracted and their counts added together.\n\n # Returns\n\n `dict[str, int or Citation : int]`\n\n > A dictionary with keys as given by _keyType_ and integers giving their rates of occurrence in the collection\n " ]
Please provide a description of the function:def localCitesOf(self, rec): localCites = [] if isinstance(rec, Record): recCite = rec.createCitation() if isinstance(rec, str): try: recCite = self.getID(rec) except ValueError: try: recCite = Citation(rec) except AttributeError: raise ValueError("{} is not a valid WOS string or a valid citation string".format(recCite)) else: if recCite is None: return RecordCollection(inCollection = localCites, name = "Records_citing_{}".format(rec), quietStart = True) else: recCite = recCite.createCitation() elif isinstance(rec, Citation): recCite = rec else: raise ValueError("{} is not a valid input, rec must be a Record, string or Citation object.".format(rec)) for R in self: rCites = R.get('citations') if rCites: for cite in rCites: if recCite == cite: localCites.append(R) break return RecordCollection(inCollection = localCites, name = "Records_citing_'{}'".format(rec), quietStart = True)
[ "Takes in a Record, WOS string, citation string or Citation and returns a RecordCollection of all records that cite it.\n\n # Parameters\n\n _rec_ : `Record, str or Citation`\n\n > The object that is being cited\n\n # Returns\n\n `RecordCollection`\n\n > A `RecordCollection` containing only those `Records` that cite _rec_\n " ]
Please provide a description of the function:def citeFilter(self, keyString = '', field = 'all', reverse = False, caseSensitive = False): retRecs = [] keyString = str(keyString) for R in self: try: if field == 'all': for cite in R.get('citations'): if caseSensitive: if keyString in cite.original: retRecs.append(R) break else: if keyString.upper() in cite.original.upper(): retRecs.append(R) break elif field == 'author': for cite in R.get('citations'): try: if keyString.upper() in cite.author.upper(): retRecs.append(R) break except AttributeError: pass elif field == 'journal': for cite in R.get('citations'): try: if keyString.upper() in cite.journal: retRecs.append(R) break except AttributeError: pass elif field == 'year': for cite in R.get('citations'): try: if int(keyString) == cite.year: retRecs.append(R) break except AttributeError: pass elif field == 'V': for cite in R.get('citations'): try: if keyString.upper() in cite.V: retRecs.append(R) break except AttributeError: pass elif field == 'P': for cite in R.get('citations'): try: if keyString.upper() in cite.P: retRecs.append(R) break except AttributeError: pass elif field == 'misc': for cite in R.get('citations'): try: if keyString.upper() in cite.misc: retRecs.append(R) break except AttributeError: pass elif field == 'anonymous': for cite in R.get('citations'): if cite.isAnonymous(): retRecs.append(R) break elif field == 'bad': for cite in R.get('citations'): if cite.bad: retRecs.append(R) break except TypeError: pass if reverse: excluded = [] for R in self: if R not in retRecs: excluded.append(R) return RecordCollection(inCollection = excluded, name = self.name, quietStart = True) else: return RecordCollection(inCollection = retRecs, name = self.name, quietStart = True)
[ "Filters `Records` by some string, _keyString_, in their citations and returns all `Records` with at least one citation possessing _keyString_ in the field given by _field_.\n\n # Parameters\n\n _keyString_ : `optional [str]`\n\n > Default `''`, gives the string to be searched for, if it is is blank then all citations with the specified field will be matched\n\n _field_ : `optional [str]`\n\n > Default `'all'`, gives the component of the citation to be looked at, it can be one of a few strings. The default is `'all'` which will cause the entire original `Citation` to be searched. It can be used to search across fields, e.g. `'1970, V2'` is a valid keystring\n The other options are:\n\n + `'author'`, searches the author field\n + `'year'`, searches the year field\n + `'journal'`, searches the journal field\n + `'V'`, searches the volume field\n + `'P'`, searches the page field\n + `'misc'`, searches all the remaining uncategorized information\n + `'anonymous'`, searches for anonymous `Citations`, _keyString_ is not ignored\n + `'bad'`, searches for bad citations, keyString is not used\n\n _reverse_ : `optional [bool]`\n\n > Default `False`, being set to `True` causes all `Records` not matching the query to be returned\n\n _caseSensitive_ : `optional [bool]`\n\n > Default `False`, if `True` causes the search across the original to be case sensitive, **only** the `'all'` option can be case sensitive\n " ]
Please provide a description of the function:def filterNonJournals(citesLst, invert = False): retCites = [] for c in citesLst: if c.isJournal(): if not invert: retCites.append(c) elif invert: retCites.append(c) return retCites
[ "Removes the `Citations` from _citesLst_ that are not journals\n\n # Parameters\n\n _citesLst_ : `list [Citation]`\n\n > A list of citations to be filtered\n\n _invert_ : `optional [bool]`\n\n > Default `False`, if `True` non-journals will be kept instead of journals\n\n # Returns\n\n `list [Citation]`\n\n > A filtered list of Citations from _citesLst_\n " ]
Please provide a description of the function:def allButDOI(self): extraTags = ['extraAuthors', 'V', 'issue', 'P', 'misc'] s = self.ID() extras = [] for tag in extraTags: if getattr(self, tag, False): extras.append(str(getattr(self, tag))) if len(extras) > 0: return "{0}, {1}".format(s, ', '.join(extras)) else: return s
[ "\n Returns a string of the normalized values from the Citation excluding the DOI number. Equivalent to getting the ID with [ID()](#metaknowledge.citation.Citation.ID) then appending the extra values from [Extra()](#metaknowledge.citation.Citation.Extra) and then removing the substring containing the DOI number.\n\n # Returns\n\n `str`\n\n > A string containing the data of the Citation.\n " ]
Please provide a description of the function:def Extra(self): extraTags = ['V', 'P', 'DOI', 'misc'] retVal = "" for tag in extraTags: if getattr(self, tag): retVal += getattr(self, tag) + ', ' if len(retVal) > 2: return retVal[:-2] else: return retVal
[ "\n Returns any `V`, `P`, `DOI` or `misc` values as a string. These are all the values not returned by [ID()](#metaknowledge.citation.Citation.ID), they are separated by `' ,'`.\n\n # Returns\n\n `str`\n\n > A string containing the data not in the ID of the `Citation`.\n " ]
Please provide a description of the function:def isJournal(self, dbname = abrevDBname, manualDB = manualDBname, returnDict ='both', checkIfExcluded = False): global abbrevDict if abbrevDict is None: abbrevDict = getj9dict(dbname = dbname, manualDB = manualDB, returnDict = returnDict) if not hasattr(self, 'journal'): return False elif checkIfExcluded and self.journal: try: if abbrevDict.get(self.journal, [True])[0]: return False else: return True except IndexError: return False else: if self.journal: dictVal = abbrevDict.get(self.journal, [b''])[0] if dictVal: return dictVal else: return False else: return False
[ "Returns `True` if the `Citation`'s `journal` field is a journal abbreviation from the WOS listing found at [http://images.webofknowledge.com/WOK46/help/WOS/A_abrvjt.html](http://images.webofknowledge.com/WOK46/help/WOS/A_abrvjt.html), i.e. checks if the citation is citing a journal.\n\n **Note**: Requires the [j9Abbreviations](../modules/journalAbbreviations.html#metaknowledge.journalAbbreviations.backend.getj9dict) database file and will raise an error if it cannot be found.\n\n **Note**: All parameters are used for getting the data base with [getj9dict](../modules/journalAbbreviations.html#metaknowledge.journalAbbreviations.backend.getj9dict).\n\n # Parameters\n\n _dbname_ : `optional [str]`\n\n > The name of the downloaded database file, the default is determined at run time. It is recommended that this remain untouched.\n\n _manualDB_ : `optional [str]`\n\n > The name of the manually created database file, the default is determined at run time. It is recommended that this remain untouched.\n\n _returnDict_ : `optional [str]`\n\n > default `'both'`, can be used to get both databases or only one with `'WOS'` or `'manual'`.\n\n # Returns\n\n `bool`\n\n > `True` if the `Citation` is for a journal\n " ]
Please provide a description of the function:def FullJournalName(self): global abbrevDict if abbrevDict is None: abbrevDict = getj9dict() if self.isJournal(): return abbrevDict[self.journal][0] else: return None
[ "Returns the full name of the Citation's journal field. Requires the [j9Abbreviations](../modules/journalAbbreviations.html#metaknowledge.journalAbbreviations.backend.getj9dict) database file.\n\n **Note**: Requires the [j9Abbreviations](../modules/journalAbbreviations.html#metaknowledge.journalAbbreviations.backend.getj9dict) database file and will raise an error if it cannot be found.\n\n # Returns\n\n `str`\n\n > The first full name given for the journal of the Citation (or the first name in the WOS list if multiple names exist), if there is not one then `None` is returned\n " ]
Please provide a description of the function:def addToDB(self, manualName = None, manualDB = manualDBname, invert = False): try: if invert: d = {self.journal : ''} elif manualName is None: d = {self.journal : self.journal} else: d = {self.journal : manualName} addToDB(abbr = d, dbname = manualDB) except KeyError: raise KeyError("This citation does not have a journal field.") else: abbrevDict.update(d)
[ "Adds the journal of this Citation to the user created database of journals. This will cause [isJournal()](#metaknowledge.citation.Citation.isJournal) to return `True` for this Citation and all others with its `journal`.\n\n **Note**: Requires the [j9Abbreviations](../modules/journalAbbreviations.html#metaknowledge.journalAbbreviations.backend.getj9dict) database file and will raise an error if it cannot be found.\n\n # Parameters\n\n _manualName_ : `optional [str]`\n\n > Default `None`, the full name of journal to use. If not provided the full name will be the same as the abbreviation.\n\n _manualDB_ : `optional [str]`\n\n > The name of the manually created database file, the default is determined at run time. It is recommended that this remain untouched.\n\n _invert_ : `optional [bool]`\n\n > Default `False`, if `True` the journal will be removed instead of added\n " ]
Please provide a description of the function:def add(self, elem): if isinstance(elem, self._allowedTypes): self._collection.add(elem) self._collectedTypes.add(type(elem).__name__) else: raise CollectionTypeError("{} can only contain '{}', '{}' is not allowed.".format(type(self).__name__, self._allowedTypes, elem))
[ " Adds _elem_ to the collection.\n\n # Parameters\n\n _elem_ : `object`\n\n > The object to be added\n " ]
Please provide a description of the function:def remove(self, elem): try: return self._collection.remove(elem) except KeyError: raise KeyError("'{}' was not found in the {}: '{}'.".format(elem, type(self).__name__, self)) from None
[ "Removes _elem_ from the collection, will raise a KeyError is _elem_ is missing\n\n # Parameters\n\n _elem_ : `object`\n\n > The object to be removed\n " ]
Please provide a description of the function:def clear(self): self.bad = False self.errors = {} self._collection.clear()
[ "\"Removes all elements from the collection and resets the error handling\n " ]
Please provide a description of the function:def pop(self): try: return self._collection.pop() except KeyError: raise KeyError("Nothing left in the {}: '{}'.".format(type(self).__name__, self)) from None
[ "Removes a random element from the collection and returns it\n\n # Returns\n\n `object`\n\n > A random object from the collection\n " ]
Please provide a description of the function:def copy(self): collectedCopy = copy.copy(self) collectedCopy._collection = copy.copy(collectedCopy._collection) self._collectedTypes = copy.copy(self._collectedTypes) self._allowedTypes = copy.copy(self._allowedTypes) collectedCopy.errors = copy.copy(collectedCopy.errors) return collectedCopy
[ "Creates a shallow copy of the collection\n\n # Returns\n\n `Collection`\n\n > A copy of the `Collection`\n " ]
Please provide a description of the function:def chunk(self, maxSize): chunks = [] currentSize = maxSize + 1 for i in self: if currentSize >= maxSize: currentSize = 0 chunks.append(type(self)({i}, name = 'Chunk-{}-of-{}'.format(len(chunks), self.name), quietStart = True)) else: chunks[-1].add(i) currentSize += 1 return chunks
[ "Splits the `Collection` into _maxSize_ size or smaller `Collections`\n\n # Parameters\n\n _maxSize_ : `int`\n\n > The maximum number of elements in a retuned `Collection`\n\n\n # Returns\n\n `list [Collection]`\n\n > A list of `Collections` that if all merged (`|` operator) would create the original\n " ]
Please provide a description of the function:def split(self, maxSize): chunks = [] currentSize = maxSize + 1 try: while True: if currentSize >= maxSize: currentSize = 0 chunks.append(type(self)({self.pop()}, name = 'Chunk-{}-of-{}'.format(len(chunks), self.name), quietStart = True)) else: chunks[-1].add(self.pop()) currentSize += 1 except KeyError: self.clear() self.name = 'Emptied-{}'.format(self.name) return chunks
[ "Destructively, splits the `Collection` into _maxSize_ size or smaller `Collections`. The source `Collection` will be empty after this operation\n\n # Parameters\n\n _maxSize_ : `int`\n\n > The maximum number of elements in a retuned `Collection`\n\n # Returns\n\n `list [Collection]`\n\n > A list of `Collections` that if all merged (`|` operator) would create the original\n " ]
Please provide a description of the function:def containsID(self, idVal): for i in self: if i.id == idVal: return True return False
[ "Checks if the collected items contains the give _idVal_\n\n # Parameters\n\n _idVal_ : `str`\n\n > The queried id string\n\n # Returns\n\n `bool`\n\n > `True` if the item is in the collection\n " ]
Please provide a description of the function:def discardID(self, idVal): for i in self: if i.id == idVal: self._collection.discard(i) return
[ "Checks if the collected items contains the give _idVal_ and discards it if it is found, will not raise an exception if item is not found\n\n # Parameters\n\n _idVal_ : `str`\n\n > The discarded id string\n " ]
Please provide a description of the function:def removeID(self, idVal): for i in self: if i.id == idVal: self._collection.remove(i) return raise KeyError("A Record with the ID '{}' was not found in the RecordCollection: '{}'.".format(idVal, self))
[ "Checks if the collected items contains the give _idVal_ and removes it if it is found, will raise a `KeyError` if item is not found\n\n # Parameters\n\n _idVal_ : `str`\n\n > The removed id string\n " ]
Please provide a description of the function:def badEntries(self): badEntries = set() for i in self: if i.bad: badEntries.add(i) return type(self)(badEntries, quietStart = True)
[ "Creates a new collection of the same type with only the bad entries\n\n # Returns\n\n `CollectionWithIDs`\n\n > A collection of only the bad entries\n " ]
Please provide a description of the function:def dropBadEntries(self): self._collection = set((i for i in self if not i.bad)) self.bad = False self.errors = {}
[ "Removes all the bad entries from the collection\n " ]
Please provide a description of the function:def tags(self): tags = set() for i in self: tags |= set(i.keys()) return tags
[ "Creates a list of all the tags of the contained items\n\n # Returns\n\n `list [str]`\n\n > A list of all the tags\n " ]
Please provide a description of the function:def glimpse(self, *tags, compact = False): return _glimpse(self, *tags, compact = compact)
[ "Creates a printable table with the most frequently occurring values of each of the requested _tags_, or if none are provided the top authors, journals and citations. The table will be as wide and as tall as the terminal (or 80x24 if there is no terminal) so `print(RC.glimpse())`should always create a nice looking table. Below is a table created from some of the testing files:\n\n ```\n >>> print(RC.glimpse())\n +RecordCollection glimpse made at: 2016-01-01 12:00:00++++++++++++++++++++++++++\n |33 Records from testFile++++++++++++++++++++++++++++++++++++++++++++++++++++++|\n |Columns are ranked by num. of occurrences and are independent of one another++|\n |-------Top Authors--------+------Top Journals-------+--------Top Cited--------|\n |1 Girard, S|1 CANADIAN JOURNAL OF PH.|1 LEVY Y, 1975, OPT COMM.|\n |1 Gilles, H|1 JOURNAL OF THE OPTICAL.|2 GOOS F, 1947, ANN PHYS.|\n |2 IMBERT, C|2 APPLIED OPTICS|3 LOTSCH HKV, 1970, OPTI.|\n |2 Pillon, F|2 OPTICS COMMUNICATIONS|4 RENARD RH, 1964, J OPT.|\n |3 BEAUREGARD, OCD|2 NUOVO CIMENTO DELLA SO.|5 IMBERT C, 1972, PHYS R.|\n |3 Laroche, M|2 JOURNAL OF THE OPTICAL.|6 ARTMANN K, 1948, ANN P.|\n |3 HUARD, S|2 JOURNAL OF THE OPTICAL.|6 COSTADEB.O, 1973, PHYS.|\n |4 PURI, A|2 NOUVELLE REVUE D OPTIQ.|6 ROOSEN G, 1973, CR ACA.|\n |4 COSTADEB.O|3 PHYSICS REPORTS-REVIEW.|7 Imbert C., 1972, Nouve.|\n |4 PATTANAYAK, DN|3 PHYSICAL REVIEW LETTERS|8 HOROWITZ BR, 1971, J O.|\n |4 Gazibegovic, A|3 USPEKHI FIZICHESKIKH N.|8 BRETENAKER F, 1992, PH.|\n |4 ROOSEN, G|3 APPLIED PHYSICS B-LASE.|8 SCHILLIN.H, 1965, ANN .|\n |4 BIRMAN, JL|3 AEU-INTERNATIONAL JOUR.|8 FEDOROV FI, 1955, DOKL.|\n |4 Kaiser, R|3 COMPTES RENDUS HEBDOMA.|8 MAZET A, 1971, CR ACAD.|\n |5 LEVY, Y|3 CHINESE PHYSICS LETTERS|9 IMBERT C, 1972, CR ACA.|\n |5 BEAUREGA.OC|3 PHYSICAL REVIEW B|9 LOTSCH HKV, 1971, OPTI.|\n |5 PAVLOV, VI|3 LETTERE AL NUOVO CIMEN.|9 ASHBY N, 1973, PHYS RE.|\n |5 BREVIK, I|3 PROGRESS IN QUANTUM EL.|9 BOULWARE DG, 1973, PHY.|\n >>>\n ```\n\n # Parameters\n\n _tags_ : `str, str, ...`\n\n > Any number of tag strings to be made into columns in the output table\n\n # Returns\n\n `str`\n\n > A string containing the table\n " ]
Please provide a description of the function:def rankedSeries(self, tag, outputFile = None, giveCounts = True, giveRanks = False, greatestFirst = True, pandasMode = True, limitTo = None): if giveRanks and giveCounts: raise mkException("rankedSeries cannot return counts and ranks only one of giveRanks or giveCounts can be True.") seriesDict = {} for R in self: #This should be faster than using get, since get is a wrapper for __getitem__ try: val = R[tag] except KeyError: continue if not isinstance(val, list): val = [val] for entry in val: if limitTo and entry not in limitTo: continue if entry in seriesDict: seriesDict[entry] += 1 else: seriesDict[entry] = 1 seriesList = sorted(seriesDict.items(), key = lambda x: x[1], reverse = greatestFirst) if outputFile is not None: with open(outputFile, 'w') as f: writer = csv.writer(f, dialect = 'excel') writer.writerow((str(tag), 'count')) writer.writerows(seriesList) if giveCounts and not pandasMode: return seriesList elif giveRanks or pandasMode: if not greatestFirst: seriesList.reverse() currentRank = 1 retList = [] panDict = {'entry' : [], 'count' : [], 'rank' : []} try: currentCount = seriesList[0][1] except IndexError: #Empty series so no need to loop pass else: for valString, count in seriesList: if currentCount > count: currentRank += 1 currentCount = count if pandasMode: panDict['entry'].append(valString) panDict['count'].append(count) panDict['rank'].append(currentRank) else: retList.append((valString, currentRank)) if not greatestFirst: retList.reverse() if pandasMode: return panDict else: return retList else: return [e for e,c in seriesList]
[ "Creates an pandas dict of the ordered list of all the values of _tag_, with and ranked by their number of occurrences. A list can also be returned with the the counts or ranks added or it can be written to a file.\n\n # Parameters\n\n _tag_ : `str`\n\n > The tag to be ranked\n\n _outputFile_ : `optional str`\n\n > A file path to write a csv with 2 columns, one the tag values the other their counts\n\n _giveCounts_ : `optional bool`\n\n > Default `True`, if `True` the retuned list will be composed of tuples the first values being the tag value and the second their counts. This supersedes _giveRanks_.\n\n _giveRanks_ : `optional bool`\n\n > Default `False`, if `True` and _giveCounts_ is `False`, the retuned list will be composed of tuples the first values being the tag value and the second their ranks. This is superseded by _giveCounts_.\n\n _greatestFirst_ : `optional bool`\n\n > Default `True`, if `True` the returned list will be ordered with the highest ranked value first, otherwise the lowest ranked will be first.\n\n _pandasMode_ : `optional bool`\n\n > Default `True`, if `True` a `dict` ready for pandas will be returned, otherwise a list\n\n _limitTo_ : `optional list[values]`\n\n > Default `None`, if a list is provided only those values in the list will be counted or returned\n\n # Returns\n\n `dict[str:list[value]] or list[str]`\n\n > A `dict` or `list` will be returned depending on if _pandasMode_ is `True`\n " ]
Please provide a description of the function:def timeSeries(self, tag = None, outputFile = None, giveYears = True, greatestFirst = True, limitTo = False, pandasMode = True): seriesDict = {} for R in self: #This should be faster than using get, since get is a wrapper for __getitem__ try: year = R['year'] except KeyError: continue if tag is None: seriesDict[R] = {year : 1} else: try: val = R[tag] except KeyError: continue if not isinstance(val, list): val = [val] for entry in val: if limitTo and entry not in limitTo: continue if entry in seriesDict: try: seriesDict[entry][year] += 1 except KeyError: seriesDict[entry][year] = 1 else: seriesDict[entry] = {year : 1} seriesList = [] for e, yd in seriesDict.items(): seriesList += [(e, y) for y in yd.keys()] seriesList = sorted(seriesList, key = lambda x: x[1], reverse = greatestFirst) if outputFile is not None: with open(outputFile, 'w') as f: writer = csv.writer(f, dialect = 'excel') writer.writerow((str(tag), 'years')) writer.writerows(((k,'|'.join((str(y) for y in v))) for k,v in seriesDict.items())) if pandasMode: panDict = {'entry' : [], 'count' : [], 'year' : []} for entry, year in seriesList: panDict['entry'].append(entry) panDict['year'].append(year) panDict['count'].append(seriesDict[entry][year]) return panDict elif giveYears: return seriesList else: return [e for e,c in seriesList]
[ "Creates an pandas dict of the ordered list of all the values of _tag_, with and ranked by the year the occurred in, multiple year occurrences will create multiple entries. A list can also be returned with the the counts or years added or it can be written to a file.\n\n If no _tag_ is given the `Records` in the collection will be used\n\n # Parameters\n\n _tag_ : `optional str`\n\n > Default `None`, if provided the tag will be ordered\n\n _outputFile_ : `optional str`\n\n > A file path to write a csv with 2 columns, one the tag values the other their years\n\n _giveYears_ : `optional bool`\n\n > Default `True`, if `True` the retuned list will be composed of tuples the first values being the tag value and the second their years.\n\n _greatestFirst_ : `optional bool`\n\n > Default `True`, if `True` the returned list will be ordered with the highest years first, otherwise the lowest years will be first.\n\n _pandasMode_ : `optional bool`\n\n > Default `True`, if `True` a `dict` ready for pandas will be returned, otherwise a list\n\n _limitTo_ : `optional list[values]`\n\n > Default `None`, if a list is provided only those values in the list will be counted or returned\n\n # Returns\n\n `dict[str:list[value]] or list[str]`\n\n > A `dict` or `list` will be returned depending on if _pandasMode_ is `True`\n " ]
Please provide a description of the function:def cooccurrenceCounts(self, keyTag, *countedTags): if not isinstance(keyTag, str): raise TagError("'{}' is not a string it cannot be used as a tag.".format(keyTag)) if len(countedTags) < 1: TagError("You need to provide atleast one tag") for tag in countedTags: if not isinstance(tag, str): raise TagError("'{}' is not a string it cannot be used as a tag.".format(tag)) occurenceDict = {} progArgs = (0, "Starting to count the co-occurrences of '{}' and' {}'".format(keyTag, "','".join(countedTags))) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: for i, R in enumerate(self): PBar.updateVal(i / len(self), "Analyzing {}".format(R)) keyVal = R.get(keyTag) if keyVal is None: continue if not isinstance(keyVal, list): keyVal = [keyVal] for key in keyVal: if key not in occurenceDict: occurenceDict[key] = {} for tag in countedTags: tagval = R.get(tag) if tagval is None: continue if not isinstance(tagval, list): tagval = [tagval] for val in tagval: for key in keyVal: try: occurenceDict[key][val] += 1 except KeyError: occurenceDict[key][val] = 1 PBar.finish("Done extracting the co-occurrences of '{}' and '{}'".format(keyTag, "','".join(countedTags))) return occurenceDict
[ "Counts the number of times values from any of the _countedTags_ occurs with _keyTag_. The counts are retuned as a dictionary with the values of _keyTag_ mapping to dictionaries with each of the _countedTags_ values mapping to thier counts.\n\n # Parameters\n\n _keyTag_ : `str`\n\n > The tag used as the key for the returned dictionary\n\n _*countedTags_ : `str, str, str, ...`\n\n > The tags used as the key for the returned dictionary's values\n\n # Returns\n\n `dict[str:dict[str:int]]`\n\n > The dictionary of counts\n " ]
Please provide a description of the function:def networkMultiLevel(self, *modes, nodeCount = True, edgeWeight = True, stemmer = None, edgeAttribute = None, nodeAttribute = None, _networkTypeString = 'n-level network'): stemCheck = False if stemmer is not None: if isinstance(stemmer, collections.abc.Callable): stemCheck = True else: raise TagError("stemmer must be callable, e.g. a function or class with a __call__ method.") count = 0 progArgs = (0, "Starting to make a {} from {}".format(_networkTypeString, modes)) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if edgeAttribute is not None: grph = nx.MultiGraph() else: grph = nx.Graph() for R in self: if PBar: count += 1 PBar.updateVal(count / len(self), "Analyzing: " + str(R)) if edgeAttribute: edgeVals = [str(v) for v in R.get(edgeAttribute, [])] if nodeAttribute: nodeVals = [str(v) for v in R.get(nodeAttribute, [])] contents = [] for attr in modes: tmpContents = R.get(attr, []) if isinstance(tmpContents, list): contents += tmpContents else: contents.append(tmpContents) if contents is not None: if not isinstance(contents, str) and isinstance(contents, collections.abc.Iterable): if stemCheck: tmplst = [stemmer(str(n)) for n in contents] else: tmplst = [str(n) for n in contents] if len(tmplst) > 1: for i, node1 in enumerate(tmplst): for node2 in tmplst[i + 1:]: if edgeAttribute: for edgeVal in edgeVals: if grph.has_edge(node1, node2, key = edgeVal): if edgeWeight: for i, a in grph.edges[node1, node2].items(): if a['key'] == edgeVal: grph[node1][node2][i]['weight'] += 1 break else: if edgeWeight: attrDict = {'key' : edgeVal, 'weight' : 1} else: attrDict = {'key' : edgeVal} grph.add_edge(node1, node2, **attrDict) elif edgeWeight: try: grph.edges[node1, node2]['weight'] += 1 except KeyError: grph.add_edge(node1, node2, weight = 1) else: if not grph.has_edge(node1, node2): grph.add_edge(node1, node2) if not grph.has_node(node1): grph.add_node(node1) if nodeCount: try: grph.node[node1]['count'] += 1 except KeyError: grph.node[node1]['count'] = 1 if nodeAttribute: try: currentAttrib = grph.node[node1][nodeAttribute] except KeyError: grph.node[node1][nodeAttribute] = nodeVals else: for nodeValue in (n for n in nodeVals if n not in currentAttrib): grph.node[node1][nodeAttribute].append(nodeValue) elif len(tmplst) == 1: if nodeCount: try: grph.node[tmplst[0]]['count'] += 1 except KeyError: grph.add_node(tmplst[0], count = 1) else: if not grph.has_node(tmplst[0]): grph.add_node(tmplst[0]) if nodeAttribute: try: currentAttrib = grph.node[tmplst[0]][nodeAttribute] except KeyError: grph.node[tmplst[0]][nodeAttribute] = nodeVals else: for nodeValue in (n for n in nodeVals if n not in currentAttrib): grph.node[tmplst[0]][nodeAttribute].append(nodeValue) else: pass else: if stemCheck: nodeVal = stemmer(str(contents)) else: nodeVal = str(contents) if nodeCount: try: grph.node[nodeVal]['count'] += 1 except KeyError: grph.add_node(nodeVal, count = 1) else: if not grph.has_node(nodeVal): grph.add_node(nodeVal) if nodeAttribute: try: currentAttrib = grph.node[nodeVal][nodeAttribute] except KeyError: grph.node[nodeVal][nodeAttribute] = nodeVals else: for nodeValue in (n for n in nodeVals if n not in currentAttrib): grph.node[nodeVal][nodeAttribute].append(nodeValue) if PBar: PBar.finish("Done making a {} from {}".format(_networkTypeString, modes)) return grph
[ "Creates a network of the objects found by any number of tags _modes_, with edges between all co-occurring values. IF you only want edges between co-occurring values from different tags use [networkMultiMode()](#metaknowledge.CollectionWithIDs.networkMultiMode).\n\n A **networkMultiLevel**() looks are each entry in the collection and extracts its values for the tag given by each of the _modes_, e.g. the `'authorsFull'` tag. Then if multiple are returned an edge is created between them. So in the case of the author tag `'authorsFull'` a co-authorship network is created. Then for each other tag the entries are also added and edges between the first tag's node and theirs are created.\n\n The number of times each object occurs is count if _nodeCount_ is `True` and the edges count the number of co-occurrences if _edgeWeight_ is `True`. Both are`True` by default.\n\n **Note** Do not use this for the construction of co-citation networks use [Recordcollection.networkCoCitation()](./classes/RecordCollection.html#metaknowledge.RecordCollection.networkCoCitation) it is more accurate and has more options.\n\n # Parameters\n\n _mode_ : `str`\n\n > A two character WOS tag or one of the full names for a tag\n\n _nodeCount_ : `optional [bool]`\n\n > Default `True`, if `True` each node will have an attribute called \"count\" that contains an int giving the number of time the object occurred.\n\n _edgeWeight_ : `optional [bool]`\n\n > Default `True`, if `True` each edge will have an attribute called \"weight\" that contains an int giving the number of time the two objects co-occurrenced.\n\n _stemmer_ : `optional [func]`\n\n > Default `None`, If _stemmer_ is a callable object, basically a function or possibly a class, it will be called for the ID of every node in the graph, all IDs are strings. For example:\n\n > The function ` f = lambda x: x[0]` if given as the stemmer will cause all IDs to be the first character of their unstemmed IDs. e.g. the title `'Goos-Hanchen and Imbert-Fedorov shifts for leaky guided modes'` will create the node `'G'`.\n\n # Returns\n\n `networkx Graph`\n\n > A networkx Graph with the objects of the tag _mode_ as nodes and their co-occurrences as edges\n " ]
Please provide a description of the function:def networkOneMode(self, mode, nodeCount = True, edgeWeight = True, stemmer = None, edgeAttribute = None, nodeAttribute = None): return self.networkMultiLevel(mode, nodeCount = nodeCount, edgeWeight = edgeWeight, stemmer = stemmer, edgeAttribute = edgeAttribute, nodeAttribute = nodeAttribute, _networkTypeString = 'one mode network')
[ "Creates a network of the objects found by one tag _mode_. This is the same as [networkMultiLevel()](#metaknowledge.CollectionWithIDs.networkMultiLevel) with only one tag.\n\n A **networkOneMode**() looks are each entry in the collection and extracts its values for the tag given by _mode_, e.g. the `'authorsFull'` tag. Then if multiple are returned an edge is created between them. So in the case of the author tag `'authorsFull'` a co-authorship network is created.\n\n The number of times each object occurs is count if _nodeCount_ is `True` and the edges count the number of co-occurrences if _edgeWeight_ is `True`. Both are`True` by default.\n\n **Note** Do not use this for the construction of co-citation networks use [Recordcollection.networkCoCitation()](./classes/RecordCollection.html#metaknowledge.RecordCollection.networkCoCitation) it is more accurate and has more options.\n\n # Parameters\n\n _mode_ : `str`\n\n > A two character WOS tag or one of the full names for a tag\n\n _nodeCount_ : `optional [bool]`\n\n > Default `True`, if `True` each node will have an attribute called \"count\" that contains an int giving the number of time the object occurred.\n\n _edgeWeight_ : `optional [bool]`\n\n > Default `True`, if `True` each edge will have an attribute called \"weight\" that contains an int giving the number of time the two objects co-occurrenced.\n\n _stemmer_ : `optional [func]`\n\n > Default `None`, If _stemmer_ is a callable object, basically a function or possibly a class, it will be called for the ID of every node in the graph, all IDs are strings. For example:\n\n > The function ` f = lambda x: x[0]` if given as the stemmer will cause all IDs to be the first character of their unstemmed IDs. e.g. the title `'Goos-Hanchen and Imbert-Fedorov shifts for leaky guided modes'` will create the node `'G'`.\n\n # Returns\n\n `networkx Graph`\n\n > A networkx Graph with the objects of the tag _mode_ as nodes and their co-occurrences as edges\n " ]
Please provide a description of the function:def networkTwoMode(self, tag1, tag2, directed = False, recordType = True, nodeCount = True, edgeWeight = True, stemmerTag1 = None, stemmerTag2 = None, edgeAttribute = None): if not isinstance(tag1, str): raise TagError("{} is not a string it cannot be a tag.".format(tag1)) if not isinstance(tag2, str): raise TagError("{} is not a string it cannot be a tag.".format(tag2)) if stemmerTag1 is not None: if isinstance(stemmerTag1, collections.abc.Callable): stemCheck = True else: raise TagError("stemmerTag1 must be callable, e.g. a function or class with a __call__ method.") else: stemmerTag1 = lambda x: x if stemmerTag2 is not None: if isinstance(stemmerTag2, collections.abc.Callable): stemCheck = True else: raise TagError("stemmerTag2 must be callable, e.g. a function or class with a __call__ method.") else: stemmerTag2 = lambda x: x count = 0 progArgs = (0, "Starting to make a two mode network of " + tag1 + " and " + tag2) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if edgeAttribute is not None: if directed: grph = nx.MultiDiGraph() else: grph = nx.MultiGraph() else: if directed: grph = nx.DiGraph() else: grph = nx.Graph() for R in self: if PBar: count += 1 PBar.updateVal(count / len(self), "Analyzing: {}".format(R)) if edgeAttribute is not None: edgeVals = R.get(edgeAttribute, []) if not isinstance(edgeVals, list): edgeVals = [edgeVals] contents1 = R.get(tag1) contents2 = R.get(tag2) if isinstance(contents1, list): contents1 = [stemmerTag1(str(v)) for v in contents1] elif contents1 == None: contents1 = [] else: contents1 = [stemmerTag1(str(contents1))] if isinstance(contents2, list): contents2 = [stemmerTag2(str(v)) for v in contents2] elif contents2 == None: contents2 = [] else: contents2 = [stemmerTag2(str(contents2))] for node1 in contents1: for node2 in contents2: if edgeAttribute: for edgeVal in edgeVals: if grph.has_edge(node1, node2, key = edgeVal): if edgeWeight: grph.edges[node1, node2, edgeVal]['weight'] += 1 else: if edgeWeight: attrDict = {'key' : edgeVal, 'weight' : 1} else: attrDict = {'key' : edgeVal} grph.add_edge(node1, node2, **attrDict) elif edgeWeight: try: grph.edges[node1, node2]['weight'] += 1 except KeyError: grph.add_edge(node1, node2, weight = 1) else: if not grph.has_edge(node1, node2): grph.add_edge(node1, node2) if nodeCount: try: grph.node[node1]['count'] += 1 except KeyError: try: grph.node[node1]['count'] = 1 if recordType: grph.node[node1]['type'] = tag1 except KeyError: if recordType: grph.add_node(node1, type = tag1) else: grph.add_node(node1) else: if not grph.has_node(node1): if recordType: grph.add_node(node1, type = tag1) else: grph.add_node(node1) elif recordType: if 'type' not in grph.node[node1]: grph.node[node1]['type'] = tag1 for node2 in contents2: if nodeCount: try: grph.node[node2]['count'] += 1 except KeyError: try: grph.node[node2]['count'] = 1 if recordType: grph.node[node2]['type'] = tag2 except KeyError: grph.add_node(node2, count = 1) if recordType: grph.node[node2]['type'] = tag2 else: if not grph.has_node(node2): if recordType: grph.add_node(node2, type = tag2) else: grph.add_node(node2) elif recordType: if 'type' not in grph.node[node2]: grph.node[node2]['type'] = tag2 if PBar: PBar.finish("Done making a two mode network of " + tag1 + " and " + tag2) return grph
[ "Creates a network of the objects found by two WOS tags _tag1_ and _tag2_, each node marked by which tag spawned it making the resultant graph bipartite.\n\n A **networkTwoMode()** looks at each Record in the `RecordCollection` and extracts its values for the tags given by _tag1_ and _tag2_, e.g. the `'WC'` and `'LA'` tags. Then for each object returned by each tag and edge is created between it and every other object of the other tag. So the WOS defined subject tag `'WC'` and language tag `'LA'`, will give a two-mode network showing the connections between subjects and languages. Each node will have an attribute call `'type'` that gives the tag that created it or both if both created it, e.g. the node `'English'` would have the type attribute be `'LA'`.\n\n The number of times each object occurs is count if _nodeCount_ is `True` and the edges count the number of co-occurrences if _edgeWeight_ is `True`. Both are`True` by default.\n\n The _directed_ parameter if `True` will cause the network to be directed with the first tag as the source and the second as the destination.\n\n # Parameters\n\n _tag1_ : `str`\n\n > A two character WOS tag or one of the full names for a tag, the source of edges on the graph\n\n _tag1_ : `str`\n\n > A two character WOS tag or one of the full names for a tag, the target of edges on the graph\n\n _directed_ : `optional [bool]`\n\n > Default `False`, if `True` the returned network is directed\n\n _nodeCount_ : `optional [bool]`\n\n > Default `True`, if `True` each node will have an attribute called \"count\" that contains an int giving the number of time the object occurred.\n\n _edgeWeight_ : `optional [bool]`\n\n > Default `True`, if `True` each edge will have an attribute called \"weight\" that contains an int giving the number of time the two objects co-occurrenced.\n\n _stemmerTag1_ : `optional [func]`\n\n > Default `None`, If _stemmerTag1_ is a callable object, basically a function or possibly a class, it will be called for the ID of every node given by _tag1_ in the graph, all IDs are strings.\n\n > For example: the function `f = lambda x: x[0]` if given as the stemmer will cause all IDs to be the first character of their unstemmed IDs. e.g. the title `'Goos-Hanchen and Imbert-Fedorov shifts for leaky guided modes'` will create the node `'G'`.\n\n _stemmerTag2_ : `optional [func]`\n\n > Default `None`, see _stemmerTag1_ as it is the same but for _tag2_\n\n # Returns\n\n `networkx Graph or networkx DiGraph`\n\n > A networkx Graph with the objects of the tags _tag1_ and _tag2_ as nodes and their co-occurrences as edges.\n " ]
Please provide a description of the function:def networkMultiMode(self, *tags, recordType = True, nodeCount = True, edgeWeight = True, stemmer = None, edgeAttribute = None): if len(tags) == 1: if not isinstance(tags[0], str): try: tags = list(tags[0]) except TypeError: raise TagError("'{}' is not a string it cannot be a tag.".format(tags[0])) for t in (i for i in tags if not isinstance(i, str)): raise TagError("{} is not a string it cannot be a tag.".format(t)) stemCheck = False if stemmer is not None: if isinstance(stemmer, collections.abc.Callable): stemCheck = True else: raise TagError("stemmer must be Callable, e.g. a function or class with a __call__ method.") count = 0 progArgs = (0, "Starting to make a " + str(len(tags)) + "-mode network of: " + ', '.join(tags)) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if edgeAttribute is not None: grph = nx.MultiGraph() else: grph = nx.Graph() for R in self: if PBar: count += 1 PBar.updateVal(count / len(self), "Analyzing: " + str(R)) contents = [] for t in tags: tmpVal = R.get(t) if stemCheck: if tmpVal: if isinstance(tmpVal, list): contents.append((t, [stemmer(str(v)) for v in tmpVal])) else: contents.append((t, [stemmer(str(tmpVal))])) else: if tmpVal: if isinstance(tmpVal, list): contents.append((t, [str(v) for v in tmpVal])) else: contents.append((t, [str(tmpVal)])) for i, vlst1 in enumerate(contents): for node1 in vlst1[1]: for vlst2 in contents[i + 1:]: for node2 in vlst2[1]: if edgeAttribute: for edgeVal in edgeVals: if grph.has_edge(node1, node2, key = edgeVal): if edgeWeight: for i, a in grph.edges[node1, node2].items(): if a['key'] == edgeVal: grph[node1][node2][i]['weight'] += 1 break else: if edgeWeight: attrDict = {'key' : edgeVal, 'weight' : 1} else: attrDict = {'key' : edgeVal} grph.add_edge(node1, node2, **attrDict) elif edgeWeight: try: grph.edges[node1, node2]['weight'] += 1 except KeyError: grph.add_edge(node1, node2, weight = 1) else: if not grph.has_edge(node1, node2): grph.add_edge(node1, node2) if nodeCount: try: grph.node[node1]['count'] += 1 except KeyError: try: grph.node[node1]['count'] = 1 if recordType: grph.node[node1]['type'] = vlst1[0] except KeyError: if recordType: grph.add_node(node1, type = vlst1[0]) else: grph.add_node(node1) else: if not grph.has_node(node1): if recordType: grph.add_node(node1, type = vlst1[0]) else: grph.add_node(node1) elif recordType: try: grph.node[node1]['type'] += vlst1[0] except KeyError: grph.node[node1]['type'] = vlst1[0] if PBar: PBar.finish("Done making a {}-mode network of: {}".format(len(tags), ', '.join(tags))) return grph
[ "Creates a network of the objects found by all tags in _tags_, each node is marked by which tag spawned it making the resultant graph n-partite.\n\n A **networkMultiMode()** looks are each item in the collection and extracts its values for the tags given by _tags_. Then for all objects returned an edge is created between them, regardless of their type. Each node will have an attribute call `'type'` that gives the tag that created it or both if both created it, e.g. if `'LA'` were in _tags_ node `'English'` would have the type attribute be `'LA'`.\n\n For example if _tags_ was set to `['CR', 'UT', 'LA']`, a three mode network would be created, composed of a co-citation network from the `'CR'` tag. Then each citation would also have edges to all the languages of Records that cited it and to the WOS number of the those Records.\n\n The number of times each object occurs is count if _nodeCount_ is `True` and the edges count the number of co-occurrences if _edgeWeight_ is `True`. Both are`True` by default.\n\n # Parameters\n\n _tags_ : `str`, `str`, `str`, ... or `list [str]`\n\n > Any number of tags, or a list of tags\n\n _nodeCount_ : `optional [bool]`\n\n > Default `True`, if `True` each node will have an attribute called `'count'` that contains an int giving the number of time the object occurred.\n\n _edgeWeight_ : `optional [bool]`\n\n > Default `True`, if `True` each edge will have an attribute called `'weight'` that contains an int giving the number of time the two objects co-occurrenced.\n\n _stemmer_ : `optional [func]`\n\n > Default `None`, If _stemmer_ is a callable object, basically a function or possibly a class, it will be called for the ID of every node in the graph, note that all IDs are strings.\n\n > For example: the function `f = lambda x: x[0]` if given as the stemmer will cause all IDs to be the first character of their unstemmed IDs. e.g. the title `'Goos-Hanchen and Imbert-Fedorov shifts for leaky guided modes'` will create the node `'G'`.\n\n # Returns\n\n `networkx Graph`\n\n > A networkx Graph with the objects of the tags _tags_ as nodes and their co-occurrences as edges\n " ]
Please provide a description of the function:def diffusionGraph(source, target, weighted = True, sourceType = "raw", targetType = "raw", labelEdgesBy = None): if sourceType != "raw" and sourceType not in tagsAndNameSet: raise RuntimeError("{} is not a valid node type, only 'raw' or those strings in tagsAndNameSet are allowed".format(sourceType)) if targetType != "raw" and targetType not in tagsAndNameSet: raise RuntimeError("{} is not a valid node type, only 'raw' or those strings in tagsAndNameSet are allowed".format(targetType)) if labelEdgesBy is not None: try: normVal = normalizeToTag(labelEdgesBy) except KeyError: raise RuntimeError ("{} is not a known tag, only tags in tagsAndNameSet are allowed.".format(labelEdgesBy)) else: labelEdgesBy = normVal count = 0 maxCount = len(source) progArgs = (0, "Starting to make a diffusion network") if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: sourceDict = {} if labelEdgesBy is None: workingGraph = nx.DiGraph() else: workingGraph = nx.MultiDiGraph() for Rs in source: if PBar: count += 1 PBar.updateVal(count / maxCount * .25, "Analyzing source: " + str(Rs)) RsVal, RsExtras = makeNodeID(Rs, sourceType) if RsVal: sourceDict[Rs.createCitation()] = RsVal for val in RsVal: if val not in workingGraph: workingGraph.add_node(val, source = True, target = False, **RsExtras) if PBar: count = 0 maxCount = len(target) PBar.updateVal(.25, "Done analyzing sources, starting on targets") for Rt in target: RtVal, RtExtras = makeNodeID(Rt, targetType) if labelEdgesBy is not None: edgeVals = Rt.get(labelEdgesBy) if edgeVals is None: continue if not isinstance(edgeVals, list): edgeVals = [edgeVals] if PBar: count += 1 PBar.updateVal(count / maxCount * .75 + .25, "Analyzing target: " + str(Rt)) if RtVal: for val in RtVal: if val not in workingGraph: workingGraph.add_node(val, source = False, target = True, **RtExtras) else: workingGraph.node[val]["target"] = True targetCites = Rt.get('CR') if targetCites: for Rs in (sourceDict[c] for c in targetCites if c in sourceDict): for sVal in Rs: if labelEdgesBy is not None: for edgeVal in (str(ev) for ev in edgeVals): if weighted: if workingGraph.has_edge(sVal, val, key = edgeVal): for i, a in workingGraph[sVal][val].items(): if a['key'] == edgeVal: workingGraph[sVal][val][i]['weight'] += 1 break else: attrDict = {'key' : edgeVal, 'weight' : 1} workingGraph.add_edge(sVal, val, attr_dict = attrDict) else: if not workingGraph.has_edge(sVal, val, key = edgeVal): workingGraph.add_edge(sVal, val, key = edgeVal) else: if weighted: try: workingGraph[sVal][val]['weight'] += 1 except KeyError: workingGraph.add_edge(sVal, val, weight = 1) else: workingGraph.add_edge(sVal, val) if PBar: PBar.finish("Done making a diffusion network of {} sources and {} targets".format(len(source), len(target))) return workingGraph
[ "Takes in two [RecordCollections](../classes/RecordCollection.html#metaknowledge.RecordCollection) and produces a graph of the citations of _source_ by the [Records](../classes/Record.html#metaknowledge.Record) in _target_. By default the nodes in the are `Record` objects but this can be changed with the _sourceType_ and _targetType_ keywords. The edges of the graph go from the target to the source.\n\n Each node on the output graph has two boolean attributes, `\"source\"` and `\"target\"` indicating if they are targets or sources. Note, if the types of the sources and targets are different the attributes will not be checked for overlap of the other type. e.g. if the source type is `'TI'` (title) and the target type is `'UT'` (WOS number), and there is some overlap of the targets and sources. Then the Record corresponding to a source node will not be checked for being one of the titles of the targets, only its WOS number will be considered.\n\n # Parameters\n\n _source_ : `RecordCollection`\n\n > A metaknowledge `RecordCollection` containing the `Records` being cited\n\n _target_ : `RecordCollection`\n\n > A metaknowledge `RecordCollection` containing the `Records` citing those in _source_\n\n _weighted_ : `optional [bool]`\n\n > Default `True`, if `True` each edge will have an attribute `'weight'` giving the number of times the source has referenced the target.\n\n _sourceType_ : `optional [str]`\n\n > Default `'raw'`, if `'raw'` the returned graph will contain `Records` as source nodes.\n\n > If Records are not wanted then it can be set to a WOS tag, such as `'SO'` (for journals ), to make the nodes into the type of object returned by that tag from Records.\n\n _targetType_ : `optional [str]`\n\n > Default `'raw'`, if `'raw'` the returned graph will contain `Records` as target nodes.\n\n > If Records are not wanted then it can be set to a WOS tag, such as `'SO'` (for journals ), to make the nodes into the type of object returned by that tag from Records.\n\n _labelEdgesBy_ : `optional [str]`\n\n > Default `None`, if a WOS tag (or long name of WOS tag) then the edges of the output graph will have a attribute `'key'` that is the value of the referenced tag, of source `Record`, i.e. if `'PY'` is given then each edge will have a `'key'` value equal to the publication year of the source.\n\n > This option will cause the output graph to be an `MultiDiGraph` and is likely to result in parallel edges. If a `Record` has multiple values for at tag (e.g. `'AF'`) the each tag will create its own edge.\n\n # Returns\n\n `networkx Directed Graph or networkx multi Directed Graph`\n\n > A directed graph of the diffusion network, _labelEdgesBy_ is used the graph will allow parallel edges.\n " ]
Please provide a description of the function:def diffusionCount(source, target, sourceType = "raw", extraValue = None, pandasFriendly = False, compareCounts = False, numAuthors = True, useAllAuthors = True, _ProgBar = None, extraMapping = None): sourceCountString = "SourceCount" targetCountString = "TargetCount" if not isinstance(sourceType, str): raise RuntimeError("{} is not a valid node type, only tags or the string 'raw' are allowed".format(sourceType)) if not isinstance(source, RecordCollection) or not isinstance(target, RecordCollection): raise RuntimeError("Source and target must be RecordCollections.") if extraValue is not None and not isinstance(extraValue, str): raise RuntimeError("{} is not a valid extraValue, only tags are allowed".format(extraValue)) if extraMapping is None: extraMapping = lambda x : x if metaknowledge.VERBOSE_MODE or _ProgBar: if _ProgBar: PBar = _ProgBar PBar.updateVal(0, "Starting to analyse a diffusion network") else: PBar = _ProgressBar(0, "Starting to analyse a diffusion network") count = 0 maxCount = len(source) else: PBar = _ProgressBar("Starting to analyse a diffusion network", dummy = True) count = 0 maxCount = len(source) sourceDict = {} #Tells the function if the IDs are made of lists or of str listIds = None for Rs in source: if listIds is None and Rs.get(sourceType) is not None: listIds = isinstance(Rs.get(sourceType), list) count += 1 PBar.updateVal(count / maxCount * .10, "Analyzing source: " + str(Rs)) RsVal, RsExtras = makeNodeID(Rs, sourceType) if RsVal: if useAllAuthors: for c in Rs.createCitation(multiCite = True): sourceDict[c] = RsVal else: sourceDict[Rs.createCitation()] = RsVal if extraValue is not None: if listIds: sourceCounts = {s : {targetCountString : 0} for s in itertools.chain.from_iterable(sourceDict.values())} else: sourceCounts = {s : {targetCountString : 0} for s in sourceDict.values()} else: if listIds: sourceCounts = {s : 0 for s in itertools.chain.from_iterable(sourceDict.values())} else: sourceCounts = {s : 0 for s in sourceDict.values()} count = 0 maxCount = len(target) PBar.updateVal(.10, "Done analyzing sources, starting on targets") for Rt in target: count += 1 PBar.updateVal(count / maxCount * .90 + .10, "Analyzing target: {}".format(Rt)) targetCites = Rt.get('citations', []) if extraValue is not None: values = Rt.get(extraValue, []) if values is None: values = [] elif not isinstance(values, list): values = [values] values = [extraMapping(val) for val in values] for c in targetCites: try: RsourceVals = sourceDict[c] except KeyError: continue if listIds: for sVal in RsourceVals: if extraValue: sourceCounts[sVal][targetCountString] += 1 for val in values: try: sourceCounts[sVal][val] += 1 except KeyError: sourceCounts[sVal][val] = 1 else: sourceCounts[sVal] += 1 else: if extraValue: sourceCounts[RsourceVals][targetCountString] += 1 for val in values: try: sourceCounts[RsourceVals][val] += 1 except KeyError: sourceCounts[RsourceVals][val] = 1 else: sourceCounts[RsourceVals] += 1 if compareCounts: localCounts = diffusionCount(source, source, sourceType = sourceType, pandasFriendly = False, compareCounts = False, extraValue = extraValue, _ProgBar = PBar) if PBar and not _ProgBar: PBar.finish("Done counting the diffusion of {} sources into {} targets".format(len(source), len(target))) if pandasFriendly: retDict = {targetCountString : []} if numAuthors: retDict["numAuthors"] = [] if compareCounts: retDict[sourceCountString] = [] if extraValue is not None: retDict[extraValue] = [] if sourceType == 'raw': retrievedFields = [] targetCount = [] for R in sourceCounts.keys(): tagsLst = [t for t in R.keys() if t not in retrievedFields] retrievedFields += tagsLst for tag in retrievedFields: retDict[tag] = [] for R, occ in sourceCounts.items(): if extraValue: Rvals = R.subDict(retrievedFields) for extraVal, occCount in occ.items(): retDict[extraValue].append(extraVal) if numAuthors: retDict["numAuthors"].append(len(R.get('authorsShort'))) for tag in retrievedFields: retDict[tag].append(Rvals[tag]) retDict[targetCountString].append(occCount) if compareCounts: try: retDict[sourceCountString].append(localCounts[R][extraVal]) except KeyError: retDict[sourceCountString].append(0) else: Rvals = R.subDict(retrievedFields) if numAuthors: retDict["numAuthors"].append(len(R.get('authorsShort'))) for tag in retrievedFields: retDict[tag].append(Rvals[tag]) retDict[targetCountString].append(occ) if compareCounts: retDict[sourceCountString].append(localCounts[R]) else: countLst = [] recLst = [] locLst = [] if extraValue: extraValueLst = [] for R, occ in sourceCounts.items(): if extraValue: for extraVal, occCount in occ.items(): countLst.append(occCount) recLst.append(R) extraValueLst.append(extraVal) if compareCounts: try: locLst.append(localCounts[R][extraValue]) except KeyError: locLst.append(0) else: countLst.append(occ) recLst.append(R) if compareCounts: locLst.append(localCounts[R]) if compareCounts: retDict = {sourceType : recLst, targetCountString : countLst, sourceCountString : locLst} else: retDict = {sourceType : recLst, targetCountString : countLst} if extraValue: retDict[extraValue] = extraValueLst return retDict else: if compareCounts: for R, occ in localCounts.items(): sourceCounts[R] = (sourceCounts[R], occ) return sourceCounts
[ "Takes in two [RecordCollections](../classes/RecordCollection.html#metaknowledge.RecordCollection) and produces a `dict` counting the citations of _source_ by the [Records](../classes/Record.html#metaknowledge.Record) of _target_. By default the `dict` uses `Record` objects as keys but this can be changed with the _sourceType_ keyword to any of the WOS tags.\n\n # Parameters\n\n _source_ : `RecordCollection`\n\n > A metaknowledge `RecordCollection` containing the `Records` being cited\n\n _target_ : `RecordCollection`\n\n > A metaknowledge `RecordCollection` containing the `Records` citing those in _source_\n\n _sourceType_ : `optional [str]`\n\n > default `'raw'`, if `'raw'` the returned `dict` will contain `Records` as keys. If it is a WOS tag the keys will be of that type.\n\n _pandasFriendly_ : `optional [bool]`\n\n > default `False`, makes the output be a dict with two keys one `\"Record\"` is the list of Records ( or data type requested by _sourceType_) the other is their occurrence counts as `\"Counts\"`. The lists are the same length.\n\n _compareCounts_ : `optional [bool]`\n\n > default `False`, if `True` the diffusion analysis will be run twice, first with source and target setup like the default (global scope) then using only the source `RecordCollection` (local scope).\n\n _extraValue_ : `optional [str]`\n\n > default `None`, if a tag the returned dictionary will have `Records` mapped to maps, these maps will map the entries for the tag to counts. If _pandasFriendly_ is also `True` the resultant dictionary will have an additional column called `'year'`. This column will contain the year the citations occurred, in addition the Records entries will be duplicated for each year they occur in.\n\n > For example if `'year'` was given then the count for a single `Record` could be `{1990 : 1, 2000 : 5}`\n\n _useAllAuthors_ : `optional [bool]`\n\n > default `True`, if `False` only the first author will be used to generate the `Citations` for the _source_ `Records`\n\n # Returns\n\n `dict[:int]`\n\n > A dictionary with the type given by _sourceType_ as keys and integers as values.\n\n > If _compareCounts_ is `True` the values are tuples with the first integer being the diffusion in the target and the second the diffusion in the source.\n\n > If _pandasFriendly_ is `True` the returned dict has keys with the names of the WOS tags and lists with their values, i.e. a table with labeled columns. The counts are in the column named `\"TargetCount\"` and if _compareCounts_ the local count is in a column called `\"SourceCount\"`.\n " ]
Please provide a description of the function:def makeNodeID(Rec, ndType, extras = None):
    if ndType == 'raw':
        recID = Rec
    else:
        recID = Rec.get(ndType)
    if recID is None:
        pass
    elif isinstance(recID, list):
        recID = tuple(recID)
    extraDict = {}
    if extras:
        for tag in extras:
            # Key the extras dict by the tag itself, not the literal string 'Tag',
            # so multiple extras do not overwrite each other
            if tag == "raw":
                extraDict[tag] = Rec
            else:
                extraDict[tag] = Rec.get(tag)
    return recID, extraDict
[ "Helper to make a node ID, extras is currently not used" ]
Please provide a description of the function:def diffusionAddCountsFromSource(grph, source, target, nodeType = 'citations', extraType = None, diffusionLabel = 'DiffusionCount', extraKeys = None, countsDict = None, extraMapping = None): progArgs = (0, "Starting to add counts to graph") if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: PBar.updateVal(0, 'Getting counts') if countsDict is None: countsDict = diffusionCount(source, target, sourceType = nodeType, extraValue = extraType, _ProgBar = PBar, extraMapping = extraMapping) try: if not isinstance(countsDict.keys().__iter__().__next__(), str): PBar.updateVal(.5, "Prepping the counts") newCountsDict = {} while True: try: k, v = countsDict.popitem() except KeyError: break newCountsDict[str(k)] = v countsDict = newCountsDict except StopIteration: pass count = 0 for n in grph.nodes_iter(): PBar.updateVal(.5 + .5 * (count / len(grph)), "Adding count for '{}'".format(n)) if extraType is not None: if extraKeys: for key in extraKeys: grph.node[n][key] = 0 grph.node[n][diffusionLabel] = 0 try: for k, v in countsDict[n].items(): if k == 'TargetCount': grph.node[n][diffusionLabel] = v else: if k: grph.node[n][k] = v except KeyError: grph.node[n][diffusionLabel] = 0 else: grph.node[n][diffusionLabel] = countsDict.get(n, 0) count += 1 PBar.finish("Done adding diffusion counts to a graph") return countsDict
[ "Does a diffusion using [diffusionCount()](#metaknowledge.diffusion.diffusionCount) and updates _grph_ with it, using the nodes in the graph as keys in the diffusion, i.e. the source. The name of the attribute the counts are added to is given by _diffusionLabel_. If the graph is not composed of citations from the source and instead is another tag _nodeType_ needs to be given the tag string.\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be updated\n\n _source_ : `RecordCollection`\n\n > The `RecordCollection` that created _grph_\n\n _target_ : `RecordCollection`\n\n > The `RecordCollection` that will be counted\n\n _nodeType_ : `optional [str]`\n\n > default `'citations'`, the tag that constants the values used to create _grph_\n\n # Returns\n\n `dict[:int]`\n\n > The counts dictioanry used to add values to _grph_. *Note* _grph_ is modified by the function and the return is done in case you need it.\n " ]
Please provide a description of the function:def pandoc_process(app, what, name, obj, options, lines): if not lines: return None input_format = app.config.mkdsupport_use_parser output_format = 'rst' # Since default encoding for sphinx.ext.autodoc is unicode and pypandoc.convert_text, which will always return a # unicode string, expects unicode or utf-8 encodes string, there is on need for dealing with coding text = SEP.join(lines) text = pypandoc.convert_text(text, output_format, format=input_format) # The 'lines' in Sphinx is a list of strings and the value should be changed del lines[:] lines.extend(text.split(SEP))
[ "\"Convert docstrings in Markdown into reStructureText using pandoc\n " ]
Please provide a description of the function:def beginningPage(R): p = R['PG'] if p.startswith('suppl '): p = p[6:] return p.split(' ')[0].split('-')[0].replace(';', '')
[ "As pages may not be given as numbers this is the most accurate this function can be" ]
Please provide a description of the function:def _bibFormatter(s, maxLength): if isinstance(s, list): s = ' and '.join((str(v) for v in s)) elif not isinstance(s, str): s = str(s) if len(s) > maxLength: s = s.replace('"', '') s = [s[i * maxLength: (i + 1) * maxLength] for i in range(len(s) // maxLength )] s = '"{}"'.format('" # "'.join(s)) elif '"' not in s: s = '"{}"'.format(s) else: s = s.replace('{', '\\{').replace('}', '\\}') s = '{{{}}}'.format(s) return s
[ "Formats a string, list or number to make it good for a bib file by:\n * if too long splits up the string correctly\n * tries to use the best quoting characters\n * expands lists into ' and ' seperated values, as per spec for authors field\n Note, this does not escape characters. LaTeX may have issues with the output\n Max length splitting derived from https://www.cs.arizona.edu/~collberg/Teaching/07.231/BibTeX/bibtex.html\n " ]
Please provide a description of the function:def copy(self): c = copy.copy(self) c._fieldDict = c._fieldDict.copy() return c
[ "Correctly copies the `Record`\n\n # Returns\n\n `Record`\n\n > A completely decoupled copy of the original\n " ]
Please provide a description of the function:def get(self, tag, default = None, raw = False): if raw: if tag in self._fieldDict: return self._fieldDict[tag] elif self.getAltName(tag) in self._fieldDict: return self._fieldDict[self.getAltName(tag)] else: return default else: try: return self[tag] except KeyError: return default
[ "Allows access to the raw values or is an Exception safe wrapper to `__getitem__`.\n\n # Parameters\n\n _tag_ : `str`\n\n > The requested tag\n\n _default_ : `optional [Object]`\n\n > Default `None`, the object returned when _tag_ is not found\n\n _raw_ : `optional [bool]`\n\n > Default `False`, if `True` the unprocessed value of _tag_ is returned\n\n # Returns\n\n `Object`\n\n > The processed value of _tag_ or _default_\n " ]
Please provide a description of the function:def values(self, raw = False): if raw: return self._fieldDict.values() else: return collections.abc.Mapping.values(self)
[ "Like `values` for dicts but with a `raw` option\n\n # Parameters\n\n _raw_ : `optional [bool]`\n\n > Default `False`, if `True` the `ValuesView` contains the raw values\n\n # Returns\n\n `ValuesView`\n\n > The values of the record\n " ]
Please provide a description of the function:def items(self, raw = False): if raw: return self._fieldDict.items() else: return collections.abc.Mapping.items(self)
[ "Like `items` for dicts but with a `raw` option\n\n # Parameters\n\n _raw_ : `optional [bool]`\n\n > Default `False`, if `True` the `KeysView` contains the raw values as the values\n\n # Returns\n\n `KeysView`\n\n > The key-value pairs of the record\n " ]
Please provide a description of the function:def getCitations(self, field = None, values = None, pandasFriendly = True): retCites = [] if values is not None: if isinstance(values, (str, int, float)) or not isinstance(values, collections.abc.Container): values = [values] if field is not None: for cite in self.get('citations', []): try: targetVal = getattr(cite, field) if values is None or targetVal in values: retCites.append(cite) except AttributeError: pass else: retCites = self.get('citations', []) if pandasFriendly: return _pandasPrep(retCites, False) return retCites
[ "Creates a pandas ready dict with each row a different citation and columns containing the original string, year, journal and author's name.\n\n There are also options to filter the output citations with _field_ and _values_\n\n # Parameters\n\n _field_ : `optional str`\n\n > Default `None`, if given all citations missing the named field will be dropped.\n\n _values_ : `optional str or list[str]`\n\n > Default `None`, if _field_ is also given only those citations with one of the strings given in _values_ will be included.\n\n > e.g. to get only citations from 1990 or 1991: `field = year, values = [1991, 1990]`\n\n _pandasFriendly_ : `optional bool`\n\n > Default `True`, if `False` a list of the citations will be returned instead of the more complicated pandas dict\n\n # Returns\n\n `dict`\n\n > A pandas ready dict with all the citations\n " ]
Please provide a description of the function:def subDict(self, tags, raw = False): retDict = {} for tag in tags: retDict[tag] = self.get(tag, raw = raw) return retDict
[ "Creates a dict of values of _tags_ from the Record. The tags are the keys and the values are the values. If the tag is missing the value will be `None`.\n\n # Parameters\n\n _tags_ : `list[str]`\n\n > The list of tags requested\n\n _raw_ : `optional [bool]`\n\n >default `False` if `True` the retuned values of the dict will be unprocessed\n\n # Returns\n\n `dict`\n\n > A dictionary with the keys _tags_ and the values from the record\n " ]
Please provide a description of the function:def createCitation(self, multiCite = False): #Need to put the import here to avoid circular import issues from .citation import Citation valsLst = [] if multiCite: auths = [] for auth in self.get("authorsShort", []): auths.append(auth.replace(',', '')) else: if self.get("authorsShort", False): valsLst.append(self['authorsShort'][0].replace(',', '')) if self.get("year", False): valsLst.append(str(self.get('year'))) if self.get("j9", False): valsLst.append(self.get('j9')) elif self.get("title", False): #No j9 means its probably book so using the books title/leaving blank valsLst.append(self.get('title', '')) if self.get("volume", False): valsLst.append('V' + str(self.get('volume'))) if self.get("beginningPage", False): valsLst.append('P' + str(self.get('beginningPage'))) if self.get("DOI", False): valsLst.append('DOI ' + self.get('DOI')) if multiCite and len(auths) > 0: return(tuple((Citation(', '.join([a] + valsLst)) for a in auths))) elif multiCite: return Citation(', '.join(valsLst)), else: return Citation(', '.join(valsLst))
[ "Creates a citation string, using the same format as other WOS citations, for the [Record](./Record.html#metaknowledge.Record) by reading the relevant special tags (`'year'`, `'J9'`, `'volume'`, `'beginningPage'`, `'DOI'`) and using it to create a [Citation](./Citation.html#metaknowledge.citation.Citation) object.\n\n # Parameters\n\n _multiCite_ : `optional [bool]`\n\n > Default `False`, if `True` a tuple of Citations is returned with each having a different one of the records authors as the author\n\n # Returns\n\n `Citation`\n\n > A [Citation](./Citation.html#metaknowledge.citation.Citation) object containing a citation for the Record.\n " ]
Please provide a description of the function:def authGenders(self, countsOnly = False, fractionsMode = False, _countsTuple = False): authDict = recordGenders(self) if _countsTuple or countsOnly or fractionsMode: rawList = list(authDict.values()) countsList = [] for k in ('Male','Female','Unknown'): countsList.append(rawList.count(k)) if fractionsMode: tot = sum(countsList) for i in range(3): countsList.append(countsList.pop(0) / tot) if _countsTuple: return tuple(countsList) else: return {'Male' : countsList[0], 'Female' : countsList[1], 'Unknown' : countsList[2]} else: return authDict
[ "Creates a dict mapping `'Male'`, `'Female'` and `'Unknown'` to lists of the names of all the authors.\n\n # Parameters\n\n _countsOnly_ : `optional bool`\n\n > Default `False`, if `True` the counts (lengths of the lists) will be given instead of the lists of names\n\n _fractionsMode_ : `optional bool`\n\n > Default `False`, if `True` the fraction counts (lengths of the lists divided by the total number of authors) will be given instead of the lists of names. This supersedes _countsOnly_\n\n # Returns\n\n `dict[str:str or int]`\n\n > The mapping of genders to author's names or counts\n " ]
Please provide a description of the function:def bibString(self, maxLength = 1000, WOSMode = False, restrictedOutput = False, niceID = True): keyEntries = [] if self.bad: raise BadRecord("This record cannot be converted to a bibtex entry as the input was malformed.\nThe original line number (if any) is: {} and the original file is: '{}'".format(self._sourceLine, self._sourceFile)) if niceID: if self.get('authorsFull'): bibID = self['authorsFull'][0].title().replace(' ', '').replace(',', '').replace('.','') else: bibID = '' if self.get('year', False): bibID += '-' + str(self['year']) if self.get('month', False): bibID += '-' + str(self['month']) if self.get('title', False): tiSorted = sorted(self.get('title').split(' '), key = len) bibID += '-' + tiSorted.pop().title() while len(bibID) < 35 and len(tiSorted) > 0: bibID += '-' + tiSorted.pop().title() #Title Case if len(bibID) < 30: bibID += str(self.id) elif WOSMode: bibID = 'ISI:{}'.format(self.id[4:]) else: bibID = str(self.id) keyEntries.append("author = {{{{{}}}}},".format(' and '.join(self.get('authorsFull', ['None'])))) if restrictedOutput: tagsIter = ((k, self[k]) for k in commonRecordFields if k in self) else: tagsIter = self.items() if WOSMode: for tag, value in tagsIter: if isinstance(value, list): keyEntries.append("{} = {{{{{}}}}},".format(tag,'\n '.join((str(v) for v in value)))) else: keyEntries.append("{} = {{{{{}}}}},".format(tag, value)) s = .format('misc', bibID, '\n'.join(keyEntries)) else: for tag, value in tagsIter: keyEntries.append("{} = {},".format(tag, _bibFormatter(value, maxLength))) s = .format('misc', bibID, '\n '.join(keyEntries)) return s
[ "Makes a string giving the Record as a bibTex entry. If the Record is of a journal article (`PT J`) the bibtext type is set to `'article'`, otherwise it is set to `'misc'`. The ID of the entry is the WOS number and all the Record's fields are given as entries with their long names.\n\n **Note** This is not meant to be used directly with LaTeX none of the special characters have been escaped and there are a large number of unnecessary fields provided. _niceID_ and _maxLength_ have been provided to make conversions easier.\n\n **Note** Record entries that are lists have their values seperated with the string `' and '`\n\n # Parameters\n\n _maxLength_ : `optional [int]`\n\n > default 1000, The max length for a continuous string. Most bibTex implementation only allow string to be up to 1000 characters ([source](https://www.cs.arizona.edu/~collberg/Teaching/07.231/BibTeX/bibtex.html)), this splits them up into substrings then uses the native string concatenation (the `'#'` character) to allow for longer strings\n\n _WOSMode_ : `optional [bool]`\n\n > default `False`, if `True` the data produced will be unprocessed and use double curly braces. This is the style WOS produces bib files in and mostly macthes that.\n\n _restrictedOutput_ : `optional [bool]`\n\n > default `False`, if `True` the tags output will be limited to tose found in `metaknowledge.commonRecordFields`\n\n _niceID_ : `optional [bool]`\n\n > default `True`, if `True` the ID used will be derived from the authors, publishing date and title, if `False` it will be the UT tag\n\n # Returns\n\n `str`\n\n > The bibTex string of the Record\n ", "@{0}{{ {1},\\n{2}\\n}}", "@{0}{{ {1},\\n {2}\\n}}" ]
Please provide a description of the function:def proQuestParser(proFile): #assumes the file is ProQuest nameDict = {} recSet = set() error = None lineNum = 0 try: with open(proFile, 'r', encoding = 'utf-8') as openfile: f = enumerate(openfile, start = 1) for i in range(12): lineNum, line = next(f) # f is file so it *should* end, or at least cause a parser error eventually while True: lineNum, line = next(f) lineNum, line = next(f) if line == 'Bibliography\n': for i in range(3): lineNum, line = next(f) break else: s = line.split('. ') nameDict[int(s[0])] = '. '.join(s[1:])[:-1] while True: #import pdb; pdb.set_trace() lineNum, line = next(f) if line == 'Bibliography\n': break elif line.startswith('Document '): n = int(line[9:].split(' of ')[0]) R = ProQuestRecord(f, sFile = proFile, sLine = lineNum) if R.get('Title') != nameDict[n]: error = BadProQuestFile("The numbering of the titles at the beginning of the file does not match the records inside. Line {} has a record titled '{}' with number {}, the name should be '{}'.".format(lineNum, R.get('Title', "TITLE MISSING"), n, nameDict[n])) raise StopIteration recSet.add(R) lineNum, line = next(f) else: #Parsing failed error = BadProQuestFile("The file '{}' has parts of it that are unparsable starting at line: {}. It is likely that the seperators between the records are incorrect".format(proFile, lineNum)) raise StopIteration except (UnicodeDecodeError, StopIteration, ValueError) as e: if error is None: error = BadProQuestFile("The file '{}' has parts of it that are unparsable starting at line: {}.\nThe error was: '{}'".format(proFile, lineNum, e)) return recSet, error
[ "Parses a ProQuest file, _proFile_, to extract the individual entries.\n\n A ProQuest file has three sections, first a list of the contained entries, second the full metadata and finally a bibtex formatted entry for the record. This parser only uses the first two as the bibtex contains no information the second section does not. Also, the first section is only used to verify the second section. The returned [ProQuestRecord](../classes/ProQuestRecord.html#metaknowledge.proquest.ProQuestRecord) contains the data from the second section, with the same key strings as ProQuest uses and the unlabeled sections are called in order, `'Name'`, `'Author'` and `'url'`.\n\n # Parameters\n\n _proFile_ : `str`\n\n > A path to a valid ProQuest file, use [isProQuestFile](#metaknowledge.proquest.proQuestHandlers.isProQuestFile) to verify\n\n # Returns\n\n `set[ProQuestRecord]`\n\n > Records for each of the entries\n " ]
Please provide a description of the function:def getInvestigators(self, tags = None, seperator = ";", _getTag = False): if tags is None: tags = ['Investigator'] elif isinstance(tags, str): tags = ['Investigator', tags] else: tags.append('Investigator') return super().getInvestigators(tags = tags, seperator = seperator, _getTag = _getTag)
[ "Returns a list of the names of investigators. The optional arguments are ignored.\n\n # Returns\n\n `list [str]`\n\n > A list of all the found investigator's names\n " ]
Please provide a description of the function:def medlineParser(pubFile): #assumes the file is MEDLINE recSet = set() error = None lineNum = 0 try: with open(pubFile, 'r', encoding = 'latin-1') as openfile: f = enumerate(openfile, start = 1) lineNum, line = next(f) try: while True: if line.startswith("PMID- "): try: r = MedlineRecord(itertools.chain([(lineNum, line)], f), sFile = pubFile, sLine = lineNum) recSet.add(r) except BadPubmedFile as e: badLine = lineNum try: lineNum, line = next(f) while not line.startswith("PMID- "): lineNum, line = next(f) except (StopIteration, UnicodeDecodeError) as e: if error is None: error = BadPubmedFile("The file '{}' becomes unparsable after line: {}, due to the error: {} ".format(pubFile, badLine, e)) raise e elif line != '\n': if error is None: error = BadPubmedFile("The file '{}' has parts of it that are unparsable starting at line: {}.".format(pubFile, lineNum)) lineNum, line = next(f) except StopIteration: #End of the file has been reached pass except UnicodeDecodeError: if error is None: error = BadPubmedFile("The file '{}' has parts of it that are unparsable starting at line: {}.".format(pubFile, lineNum)) return recSet, error
[ "Parses a medline file, _pubFile_, to extract the individual entries as [MedlineRecords](#metaknowledge.medline.recordMedline.MedlineRecord).\n\n A medline file is a series of entries, each entry is a series of tags. A tag is a 2 to 4 character string each tag is padded with spaces on the left to make it 4 characters which is followed by a dash and a space (`'- '`). Everything after the tag and on all lines after it not starting with a tag is considered associated with the tag. Each entry's first tag is `PMID`, so a first line looks something like `PMID- 26524502`. Entries end with a single blank line.\n\n # Parameters\n\n _pubFile_ : `str`\n\n > A path to a valid medline file, use [isMedlineFile](#metaknowledge.medline.medlineHandlers.isMedlineFile) to verify\n\n # Returns\n\n `set[MedlineRecord]`\n\n > Records for each of the entries\n " ]
Please provide a description of the function:def nameStringGender(s, noExcept = False):
    global mappingDict
    try:
        first = s.split(', ')[1].split(' ')[0].title()
    except IndexError:
        if noExcept:
            return 'Unknown'
        else:
            # Raise the exception instead of returning it
            raise GenderException("The given String: '{}' does not have a last name, first name pair with a ', ' separation.".format(s))
    if mappingDict is None:
        mappingDict = getMapping()
    return mappingDict.get(first, 'Unknown')
[ "Expects `first, last`" ]
Please provide a description of the function:def j9urlGenerator(nameDict = False): start = "https://images.webofknowledge.com/images/help/WOS/" end = "_abrvjt.html" if nameDict: urls = {"0-9" : start + "0-9" + end} for c in string.ascii_uppercase: urls[c] = start + c + end else: urls = [start + "0-9" + end] for c in string.ascii_uppercase: urls.append(start + c + end) return urls
[ "How to get all the urls for the WOS Journal Title Abbreviations. Each is varies by only a few characters. These are the currently in use urls they may change.\n\n They are of the form:\n\n > \"https://images.webofknowledge.com/images/help/WOS/{VAL}_abrvjt.html\"\n > Where {VAL} is a capital letter or the string \"0-9\"\n\n # Returns\n\n `list[str]`\n\n > A list of all the url's strings\n " ]
Please provide a description of the function:def _j9SaveCurrent(sDir = '.'): dname = os.path.normpath(sDir + '/' + datetime.datetime.now().strftime("%Y-%m-%d_J9_AbbreviationDocs")) if not os.path.isdir(dname): os.mkdir(dname) os.chdir(dname) else: os.chdir(dname) for urlID, urlString in j9urlGenerator(nameDict = True).items(): fname = "{}_abrvjt.html".format(urlID) f = open(fname, 'wb') f.write(urllib.request.urlopen(urlString).read())
[ "Downloads and saves all the webpages\n\n For Backend\n " ]
Please provide a description of the function:def _getDict(j9Page): slines = j9Page.read().decode('utf-8').split('\n') while slines.pop(0) != "<DL>": pass currentName = slines.pop(0).split('"></A><DT>')[1] currentTag = slines.pop(0).split("<B><DD>\t")[1] j9Dict = {} while True: try: j9Dict[currentTag].append(currentName) except KeyError: j9Dict[currentTag] = [currentName] try: currentName = slines.pop(0).split('</B><DT>')[1] currentTag = slines.pop(0).split("<B><DD>\t")[1] except IndexError: break return j9Dict
[ "Parses a Journal Title Abbreviations page\n\n Note the pages are not well formatted html as the <DT> tags are not closes so html parses (Beautiful Soup) do not work. This is a simple parser that only works on the webpages and may fail if they are changed\n\n For Backend\n " ]
Please provide a description of the function:def _getCurrentj9Dict(): urls = j9urlGenerator() j9Dict = {} for url in urls: d = _getDict(urllib.request.urlopen(url)) if len(d) == 0: raise RuntimeError("Parsing failed, this is could require an update of the parser.") j9Dict.update(d) return j9Dict
[ "Downloads and parses all the webpages\n\n For Backend\n " ]
Please provide a description of the function:def updatej9DB(dbname = abrevDBname, saveRawHTML = False): if saveRawHTML: rawDir = '{}/j9Raws'.format(os.path.dirname(__file__)) if not os.path.isdir(rawDir): os.mkdir(rawDir) _j9SaveCurrent(sDir = rawDir) dbLoc = os.path.join(os.path.normpath(os.path.dirname(__file__)), dbname) try: with dbm.dumb.open(dbLoc, flag = 'c') as db: try: j9Dict = _getCurrentj9Dict() except urllib.error.URLError: raise urllib.error.URLError("Unable to access server, check your connection") for k, v in j9Dict.items(): if k in db: for jName in v: if jName not in j9Dict[k]: j9Dict[k] += '|' + jName else: db[k] = '|'.join(v) except dbm.dumb.error as e: raise JournalDataBaseError("Something happened with the database of WOS journal names. To fix this you should delete the 1 to 3 files whose names start with {}. If this doesn't work (sorry), deleteing everything in '{}' and reinstalling metaknowledge should.\nThe error was '{}'".format(dbLoc, os.path.dirname(__file__), e))
[ "Updates the database of Journal Title Abbreviations. Requires an internet connection. The data base is saved relative to the source file not the working directory.\n\n # Parameters\n\n _dbname_ : `optional [str]`\n\n > The name of the database file, default is \"j9Abbreviations.db\"\n\n _saveRawHTML_ : `optional [bool]`\n\n > Determines if the original HTML of the pages is stored, default `False`. If `True` they are saved in a directory inside j9Raws begining with todays date.\n " ]
Please provide a description of the function:def getj9dict(dbname = abrevDBname, manualDB = manualDBname, returnDict ='both'): dbLoc = os.path.normpath(os.path.dirname(__file__)) retDict = {} try: if returnDict == 'both' or returnDict == 'WOS': with dbm.dumb.open(dbLoc + '/{}'.format(dbname)) as db: if len(db) == 0: raise JournalDataBaseError("J9 Database empty or missing, to regenerate it import and run metaknowledge.WOS.journalAbbreviations.updatej9DB().") for k, v in db.items(): retDict[k.decode('utf-8')] = v.decode('utf-8').split('|') except JournalDataBaseError: updatej9DB() return getj9dict(dbname = dbname, manualDB = manualDB, returnDict = returnDict) try: if returnDict == 'both' or returnDict == 'manual': if os.path.isfile(dbLoc + '/{}.dat'.format(manualDB)): with dbm.dumb.open(dbLoc + '/{}'.format(manualDB)) as db: for k, v in db.items(): retDict[k.decode('utf-8')] = v.decode('utf-8').split('|') else: if returnDict == 'manual': raise JournalDataBaseError("Manual J9 Database ({0}) missing, to create it run addToDB(dbname = {0})".format(manualDB)) except JournalDataBaseError: updatej9DB(dbname = manualDB) return getj9dict(dbname = dbname, manualDB = manualDB, returnDict = returnDict) return retDict
[ "Returns the dictionary of journal abbreviations mapping to a list of the associated journal names. By default the local database is used. The database is in the file _dbname_ in the same directory as this source file\n\n # Parameters\n\n _dbname_ : `optional [str]`\n\n > The name of the downloaded database file, the default is determined at run time. It is recommended that this remain untouched.\n\n _manualDB_ : `optional [str]`\n\n > The name of the manually created database file, the default is determined at run time. It is recommended that this remain untouched.\n\n _returnDict_ : `optional [str]`\n\n > default `'both'`, can be used to get both databases or only one with `'WOS'` or `'manual'`.\n " ]
Please provide a description of the function:def addToDB(abbr = None, dbname = manualDBname): dbLoc = os.path.normpath(os.path.dirname(__file__)) with dbm.dumb.open(dbLoc + '/' + dbname) as db: if isinstance(abbr, str): db[abbr] = abbr elif isinstance(abbr, dict): try: db.update(abbr) except TypeError: raise TypeError("The keys and values of abbr must be strings.") elif abbr is None: pass else: raise TypeError("abbr must be a str or dict.")
[ "Adds _abbr_ to the database of journals. The database is kept separate from the one scraped from WOS, this supersedes it. The database by default is stored with the WOS one and the name is given by `metaknowledge.journalAbbreviations.manualDBname`. To create an empty database run **addToDB** without an _abbr_ argument.\n\n # Parameters\n\n _abbr_ : `optional [str or dict[str : str]]`\n\n > The journal abbreviation to be added to the database, it can either be a single string in which case that string will be added with its self as the full name, or a dict can be given with the abbreviations as keys and their names as strings, use pipes (`'|'`) to separate multiple names. Note, if the empty string is given as a name the abbreviation will be considered manually __excluded__, i.e. having excludeFromDB() run on it.\n\n _dbname_ : `optional [str]`\n\n > The name of the database file, default is `metaknowledge.journalAbbreviations.manualDBname`.\n " ]
Please provide a description of the function:def excludeFromDB(abbr = None, dbname = manualDBname): dbLoc = os.path.normpath(os.path.dirname(__file__)) with dbm.dumb.open(dbLoc + '/' + dbname) as db: if isinstance(abbr, str): db[abbr] = '' elif isinstance(abbr, list) or isinstance(abbr, tuple): try: db.update({k : '' for k in abbr}) except TypeError: raise TypeError("The keys and values of abbr must be strings.") elif abbr is None: pass else: raise TypeError("abbr must be a str, list or tuple.")
[ "Marks _abbr_ to be excluded the database of journals. The database is kept separate from the one scraped from WOS, this supersedes it. The database by default is stored with the WOS one and the name is given by `metaknowledge.journalAbbreviations.manualDBname`. To create an empty database run [addToDB()](#metaknowledge.journalAbbreviations.backend.addToDB) without an _abbr_ argument.\n\n # Parameters\n\n _abbr_ : `optional [str or tuple[str] or list[str]`\n\n > The journal abbreviation to be excluded from the database, it can either be a single string in which case that string will be exclude or a list/tuple of strings can be given with the abbreviations.\n\n _dbname_ : `optional [str]`\n\n > The name of the database file, default is `metaknowledge.journalAbbreviations.manualDBname`.\n " ]
Please provide a description of the function:def normalizeToTag(val): try: val = val.upper() except AttributeError: raise KeyError("{} is not a tag or name string".format(val)) if val not in tagsAndNameSetUpper: raise KeyError("{} is not a tag or name string".format(val)) else: try: return fullToTagDictUpper[val] except KeyError: return val
[ "Converts tags or full names to 2 character tags, case insensitive\n\n # Parameters\n\n _val_: `str`\n\n > A two character string giving the tag or its full name\n\n # Returns\n\n `str`\n\n > The short name of _val_\n " ]
Please provide a description of the function:def normalizeToName(val): if val not in tagsAndNameSet: raise KeyError("{} is not a tag or name string".format(val)) else: try: return tagToFullDict[val] except KeyError: return val
[ "Converts tags or full names to full names, case sensitive\n\n # Parameters\n\n _val_: `str`\n\n > A two character string giving the tag or its full name\n\n # Returns\n\n `str`\n\n > The full name of _val_\n " ]
Please provide a description of the function:def authAddress(val): ret = [] for a in val: if a[0] == '[': ret.append('] '.join(a.split('] ')[1:])) else: ret.append(a) return ret
[ "\n # The C1 Tag\n\n extracts the address of the authors as given by WOS. **Warning** the mapping of author to address is not very good and is given in multiple ways.\n\n # Parameters\n\n _val_: `list[str]`\n\n > The raw data from a WOS file\n\n # Returns\n\n `list[str]`\n\n > A list of addresses\n\n " ]
Please provide a description of the function:def citations(val): retCites = [] for c in val: retCites.append(Citation(c)) return retCites
[ "\n # The CR Tag\n\n extracts a list of all the citations in the record, the citations are the [metaknowledge.Citation](../classes/Citation.html#metaknowledge.citation.Citation) class.\n\n # Parameters\n\n _val_: `list[str]`\n\n > The raw data from a WOS file\n\n # Returns\n\n ` list[metaknowledge.Citation]`\n\n > A list of Citations\n\n " ]
Please provide a description of the function:def getInvestigators(self, tags = None, seperator = ";", _getTag = False): #By default we don't know which field has the investigators investVal = [] retTag = None if tags is not None: if not isinstance(tags, list): tags = [tags] for tag in tags: try: tval = self[tag].split(seperator) if _getTag: investVal += [(t.strip(), tag) for t in tval] else: investVal += [t.strip() for t in tval] except KeyError: pass except AttributeError: tval = [auth.split(seperator)[0] for auth in self[tag]] if _getTag: investVal += [(t.strip(), tag) for t in tval] else: investVal += [t.strip() for t in tval] return investVal
[ "Returns a list of the names of investigators. This is done by looking (in order) for any of fields in _tags_ and splitting the strings on _seperator_. If no strings are found an empty list will be returned.\n\n *Note* for some Grants `getInvestigators` has been overwritten and will ignore the arguments and simply provide the investigators.\n\n # Parameters\n\n _tags_ : `optional list[str]`\n\n > A list of the tags to look for investigators in\n\n _seperator_ : `optional str`\n\n > The string that separators each investigators name within the column\n\n # Returns\n\n `list [str]`\n\n > A list of all the found investigator's names\n " ]
Please provide a description of the function:def getInstitutions(self, tags = None, seperator = ";", _getTag = False): return self.getInvestigators(tags = tags, seperator = seperator, _getTag = _getTag)
[ "Returns a list of the names of institutions. This is done by looking (in order) for any of fields in _tags_ and splitting the strings on _seperator_ (in case of multiple institutions). If no strings are found an empty list will be returned.\n\n *Note* for some Grants `getInstitutions` has been overwritten and will ignore the arguments and simply provide the investigators.\n\n # Parameters\n\n _tags_ : `optional list[str]`\n\n > A list of the tags to look for institutions in\n\n _seperator_ : `optional str`\n\n > The string that separators each institutions name within the column\n\n # Returns\n\n `list [str]`\n\n > A list of all the found institution's names\n " ]
Please provide a description of the function:def update(self, other): if type(self) != type(other): return NotImplemented else: if other.bad: self.error = other.error self.bad = True self._fieldDict.update(other._fieldDict)
[ "Adds all the tag-entry pairs from _other_ to the `Grant`. If there is a conflict _other_ takes precedence.\n\n # Parameters\n\n _other_ : `Grant`\n\n > Another `Grant` of the same type as _self_\n " ]
Please provide a description of the function:def networkCoInvestigatorInstitution(self, targetTags = None, tagSeperator = ';', count = True, weighted = True): return self.networkCoInvestigator(targetTags = targetTags, tagSeperator = tagSeperator, count = count, weighted = weighted, _institutionLevel = True)
[ "This works the same as [networkCoInvestigator()](#metaknowledge.GrantCollection.networkCoInvestigator) see it for details." ]
Please provide a description of the function:def networkCoInvestigator(self, targetTags = None, tagSeperator = ';', count = True, weighted = True, _institutionLevel = False): grph = nx.Graph() pcount = 0 if _institutionLevel: progArgs = (0, "Starting to make a co-institution network") else: progArgs = (0, "Starting to make a co-investigator network") if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: for G in self: if PBar: pcount += 1 PBar.updateVal(pcount/ len(self), "Analyzing: " + str(G)) if _institutionLevel: investList = G.getInstitutions(tags = targetTags, seperator = tagSeperator, _getTag = True) else: investList = G.getInvestigators(tags = targetTags, seperator = tagSeperator, _getTag = True) if len(investList) > 1: for i, invest1 in enumerate(investList): if invest1[0] not in grph: if count: grph.add_node(invest1[0], count = 1, field = invest1[1]) else: grph.add_node(invest1[0], field = invest1[1]) elif count: grph.node[invest1[0]]['count'] += 1 for invest2 in investList[i + 1:]: if invest2[0] not in grph: if count: grph.add_node(invest2[0], count = 1, field = invest2[1]) else: grph.add_node(invest2[0], field = invest2[1]) elif count: grph.node[invest2[0]]['count'] += 1 if grph.has_edge(invest1[0], invest2[0]) and weighted: grph.edges[invest1[0], invest2[0]]['weight'] += 1 elif weighted: grph.add_edge(invest1[0], invest2[0], weight = 1) else: grph.add_edge(invest1[0], invest2[0]) elif len(investList) > 0: invest1 = investList[0] if invest1[0] not in grph: if count: grph.add_node(invest1[0], count = 1, field = invest1[1]) else: grph.add_node(invest1[0], field = invest1[1]) elif count: grph.node[invest1[0]]['count'] += 1 if _institutionLevel: PBar.finish("Done making a co-institution network from {}".format(self)) else: PBar.finish("Done making a co-investigator network from {}".format(self)) return grph
[ "Creates a co-investigator from the collection\n\n Most grants do not have a known investigator tag so it must be provided by the user in _targetTags_ and the separator character if it is not a semicolon should also be given.\n\n # Parameters\n\n > _targetTags_ : `optional list[str]`\n\n > A list of all the Grant tags to check for investigators\n\n _tagSeperator_ : `optional str`\n\n > The character that separates the individual investigator's names\n\n _count_ : `optional bool`\n\n > Default `True`, if `True` the number of time a name occurs will be given\n\n _weighted_ : `optional bool`\n\n > Default `True`, if `True` the edge weights will be calculated and added to the edges\n\n # Returns\n\n `networkx Graph`\n\n > The graph of co-investigator\n " ]
Please provide a description of the function:def proQuestTagToFunc(tag): if tag in singleLineEntries: return lambda x : x[0] elif tag in customTags: return customTags[tag] else: return lambda x : x
[ "Takes a tag string, _tag_, and returns the processing function for its data. If their is not a predefined function returns the identity function (`lambda x : x`).\n\n # Parameters\n\n _tag_ : `str`\n\n > The requested tag\n\n # Returns\n\n `function`\n\n > A function to process the tag's data\n " ]
Please provide a description of the function:def scopusRecordParser(record, header = None): if header is None: header = scopusHeader splitRecord = record[:-1].split(',') tagDict = {} quoted = False for key in reversed(header): currentVal = splitRecord.pop() if currentVal == '': pass elif currentVal[-1] == '"': if re.match(firstQuotingRegex, currentVal) is None: valString = ',' + currentVal[:-1] currentVal = splitRecord.pop() #double quotes (") are escaped by proceeding them with another double quote #So an entry containing: #',"stuff,""quoted"",more stuff,""more quoted""",' #would be a single string belonging to 1 column that looks like: #'stuff,"quoted",more stuff,"more quoted"' #We are not going to unescape the quotation marks but we do have to deal with them while re.match(innerQuotingRegex, currentVal) is None: valString = ',' + currentVal + valString currentVal = splitRecord.pop() valString = currentVal[1:] + valString else: try: valString = currentVal[1:-1] except ValueError: valString = currentVal[1:-1] tagDict[key] = valString else: tagDict[key] = currentVal return tagDict
[ "The parser [ScopusRecords](../classes/ScopusRecord.html#metaknowledge.scopus.ScopusRecord) use. This takes a line from [scopusParser()](#metaknowledge.scopus.scopusHandlers.scopusParser) and parses it as a part of the creation of a `ScopusRecord`.\n\n **Note** this is for csv files downloaded from scopus _not_ the text records as those are less complete. Also, Scopus uses double quotes (`\"`) to quote strings, such as abstracts, in the csv so double quotes in the string must be escaped. For reasons not fully understandable by mortals they choose to use two double quotes in a row (`\"\"`) to represent an escaped double quote. This parser does not unescape these quotes, but it does correctly handle their interacts with the outer double quotes.\n\n # Parameters\n\n _record_ : `str`\n\n > string ending with a newline containing the record's entry\n\n # Returns\n\n `dict`\n\n > A dictionary of the key-vaue pairs in the entry\n " ]
Please provide a description of the function:def createCitation(self, multiCite = False): #Need to put the import here to avoid circular import issues from ..citation import Citation valsStr = '' if multiCite: auths = [] for auth in self.get("authorsShort", []): auths.append(auth.replace(',', '')) else: if self.get("authorsShort", False): valsStr += self['authorsShort'][0].replace(',', '') + ', ' if self.get("title", False): valsStr += self.get('title').replace('(', '').replace(')', '') + ' ' if self.get("year", False): valsStr += "({}) ".format(self.get('year')) if self.get("journal", False): valsStr += self.get('journal') + ', ' if self.get("volume", False): valsStr += str(self.get('volume')) + ', ' if self.get("beginningPage", False): valsStr += 'PP. ' + str(self.get('beginningPage')) if multiCite and len(auths) > 0: ret = (tuple((Citation(a + valsStr, scopusMode = True) for a in auths))) elif multiCite: ret = Citation(valsStr, scopusMode = True), else: ret = Citation(valsStr, scopusMode = True) if multiCite: rL = [] for c in ret: if c.bad: c.year = self.get('year', 0) c.name = self.get('title', '').upper() c.journal = self.get("journal", '').upper() rL.append(c) return tuple(rL) else: if ret.bad: ret.year = self.get('year', 0) ret.name = self.get('title', '').upper() ret.journal = self.get("journal", '').upper() return ret
[ "Overwriting the general [citation creator](./ExtendedRecord.html#metaknowledge.ExtendedRecord.createCitation) to deal with scopus weirdness.\n\n Creates a citation string, using the same format as other WOS citations, for the [Record](./Record.html#metaknowledge.Record) by reading the relevant special tags (`'year'`, `'J9'`, `'volume'`, `'beginningPage'`, `'DOI'`) and using it to create a [Citation](./Citation.html#metaknowledge.citation.Citation) object.\n\n # Parameters\n\n _multiCite_ : `optional [bool]`\n\n > Default `False`, if `True` a tuple of Citations is returned with each having a different one of the records authors as the author\n\n # Returns\n\n `Citation`\n\n > A [Citation](./Citation.html#metaknowledge.citation.Citation) object containing a citation for the Record.\n " ]
Please provide a description of the function:def isWOSFile(infile, checkedLines = 3): try: with open(infile, 'r', encoding='utf-8-sig') as openfile: f = enumerate(openfile, start = 0) for i in range(checkedLines): if "VR 1.0" in f.__next__()[1]: return True except (StopIteration, UnicodeDecodeError): return False else: return False
[ "Determines if _infile_ is the path to a WOS file. A file is considerd to be a WOS file if it has the correct encoding (`utf-8` with a BOM) and within the first _checkedLines_ a line starts with `\"VR 1.0\"`.\n\n # Parameters\n\n _infile_ : `str`\n\n > The path to the targets file\n\n _checkedLines_ : `optional [int]`\n\n > default 2, the number of lines to check for the header\n\n # Returns\n\n `bool`\n\n > `True` if the file is a WOS file\n " ]