NER-Chinese by EOA-AILab

Chinese NER model comparison research paper

created 6 years ago
334 stars

Top 83.3% on sourcepulse

Project Summary

This repository provides a comparative analysis of Chinese Named Entity Recognition (NER) models, specifically focusing on the NeuroNER (BiLSTM-CRF) and BertNER (Bert-BiLSTM-CRF) architectures. It is intended for researchers and practitioners in Natural Language Processing (NLP) seeking to understand and evaluate state-of-the-art approaches for Chinese NER.
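To make the task concrete: Chinese NER models in this style emit one BIO tag per character, which is then decoded into entity spans. A minimal sketch of that decoding step (the sentence and tags below are illustrative examples, not taken from the repository):

```python
def bio_to_entities(chars, tags):
    """Collect (entity_text, label, start, end) spans from per-character BIO tags."""
    entities, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" sentinel flushes a final entity
        # Close the open entity when a new one begins, we hit "O", or the label changes.
        if start is not None and (not tag.startswith("I-") or tag[2:] != label):
            entities.append(("".join(chars[start:i]), label, start, i))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return entities

chars = list("张三在北京工作")  # "Zhang San works in Beijing"
tags = ["B-PER", "I-PER", "O", "B-LOC", "I-LOC", "O", "O"]
print(bio_to_entities(chars, tags))  # → [('张三', 'PER', 0, 2), ('北京', 'LOC', 3, 5)]
```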

How It Works

The project compares two primary architectures for Chinese NER. The first, a BiLSTM-CRF model, utilizes an embedding layer (word and character embeddings, plus additional features), followed by a bidirectional LSTM layer, and a Conditional Random Field (CRF) layer for sequence prediction. This represents a mainstream approach in NER. The second architecture, Bert-BiLSTM-CRF, replaces the traditional word embedding layer with a fine-tuned BERT model, leveraging its powerful contextual representations while retaining the BiLSTM and CRF layers for sequence labeling.
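In both architectures, the CRF layer on top scores the tag sequence jointly rather than picking each tag independently, which lets learned transition scores rule out invalid sequences such as O followed by I-PER. A minimal Viterbi decoder over toy emission and transition scores (plain Python with invented numbers, standing in for the repository's actual trained model) illustrates the idea:

```python
def viterbi(emissions, transitions, tags):
    """Find the best tag sequence given per-token emission scores and
    tag-to-tag transition scores (log-space, so scores add)."""
    n = len(tags)
    score = list(emissions[0])          # best score ending in each tag at step 0
    back = []                           # backpointers, one row per later step
    for em in emissions[1:]:
        new_score, ptr = [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: score[i] + transitions[i][j])
            new_score.append(score[best_i] + transitions[best_i][j] + em[j])
            ptr.append(best_i)
        score, back = new_score, back + [ptr]
    best = max(range(n), key=lambda j: score[j])
    path = [best]
    for ptr in reversed(back):          # walk backpointers to recover the path
        path.append(ptr[path[-1]])
    return [tags[i] for i in reversed(path)]

tags = ["O", "B-PER", "I-PER"]
# Transition scores forbid O -> I-PER (large negative), as a trained CRF would learn.
trans = [[0.0, 0.0, -10.0],
         [0.0, 0.0,   1.0],
         [0.0, 0.0,   1.0]]
emis = [[0.1, 2.0, 0.0],   # token 1 strongly looks like B-PER
        [0.5, 0.0, 1.0],   # token 2 ambiguous between O and I-PER
        [2.0, 0.0, 0.0]]   # token 3 looks like O
print(viterbi(emis, trans, tags))  # → ['B-PER', 'I-PER', 'O']
```

The transition matrix is what distinguishes the CRF from a per-token softmax: token 2's emissions slightly favor O, but the learned B-PER → I-PER transition bonus makes the jointly best sequence tag it I-PER.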

Highlighted Details

  • Comparison of NeuroNER (BiLSTM-CRF) and BertNER (Bert-BiLSTM-CRF) for Chinese NER.
  • Explores the effectiveness of BERT fine-tuning for Chinese NER tasks.

Maintenance & Community

No information on maintenance or community channels is available in the provided README.

Licensing & Compatibility

The license is not specified in the provided README.

Limitations & Caveats

The README does not specify installation instructions, dependencies, or performance benchmarks. The project appears to be a comparative study rather than a runnable toolkit.

Health Check

  • Last commit: 6 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 90 days
