Chinese NER model comparison research paper
This repository presents a comparative analysis of Chinese Named Entity Recognition (NER) models, focusing on the NeuroNER (BiLSTM-CRF) and BertNER (BERT-BiLSTM-CRF) architectures. It is intended for researchers and practitioners in Natural Language Processing (NLP) who want to understand and evaluate state-of-the-art approaches to Chinese NER.
How It Works
The project compares two architectures for Chinese NER. The first, a BiLSTM-CRF model, combines an embedding layer (word and character embeddings, plus additional features), a bidirectional LSTM encoder, and a Conditional Random Field (CRF) layer for sequence prediction; this represents the mainstream neural approach to NER. The second, BERT-BiLSTM-CRF, replaces the static word-embedding layer with a fine-tuned BERT model, leveraging its contextual representations while retaining the BiLSTM and CRF layers for sequence labeling.
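Both architectures share the CRF output layer, whose prediction step is a Viterbi search for the highest-scoring tag sequence. A minimal pure-Python sketch of that decoding step, using hypothetical toy scores (in a trained model, the emission scores come from the BiLSTM or BERT encoder and the transition matrix is a learned CRF parameter):

```python
def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of [num_tags] score lists, one entry per token
    transitions: [num_tags][num_tags] matrix; transitions[i][j] is the
                 score of moving from tag i to tag j
    """
    num_tags = len(emissions[0])
    # best-path scores ending in each tag at the first token
    scores = list(emissions[0])
    backpointers = []
    for emission in emissions[1:]:
        new_scores = []
        pointers = []
        for j in range(num_tags):
            # pick the predecessor tag that maximizes score + transition
            best_prev = max(range(num_tags),
                            key=lambda i: scores[i] + transitions[i][j])
            pointers.append(best_prev)
            new_scores.append(scores[best_prev]
                              + transitions[best_prev][j]
                              + emission[j])
        scores = new_scores
        backpointers.append(pointers)
    # trace back from the best final tag to recover the full path
    best_tag = max(range(num_tags), key=lambda j: scores[j])
    path = [best_tag]
    for pointers in reversed(backpointers):
        best_tag = pointers[best_tag]
        path.append(best_tag)
    return list(reversed(path))
```

The CRF's value over per-token argmax is exactly this transition term: it can penalize invalid tag bigrams (e.g. `O` followed by `I-PER` in a BIO scheme), which matters for both models compared here.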
Maintenance & Community
No information on maintenance or community channels is available in the provided README.
Licensing & Compatibility
The license is not specified in the provided README.
Limitations & Caveats
The README does not specify installation instructions, dependencies, or performance benchmarks. The project appears to be a comparative study rather than a runnable toolkit.
Last activity: roughly 6 years ago; the repository appears inactive.