Model Card for pricer-merged-model-A-v1
A merged LLaMA 3.1 8B checkpoint specialized for numeric price prediction from product text. This model was created by merging LLaMA 3.1 8B with a LoRA adapter (Pricer LoRA v1) and is intended to serve as a base model for further LoRA fine-tuning.
Model Details
Model Description
pricer-merged-model-A-v1 is a transformer-based causal language model optimized for estimating approximate consumer product prices from textual metadata such as title, description, and category. It represents a merged checkpoint (base model + LoRA), not an adapter-only model.
- Developed by: MyungHwan Hong (MightyOctopus)
- Funded by: Self-funded / independent research
- Shared by: MyungHwan Hong
- Model type: Causal Language Model (Text-to-Number / Numeric Prediction)
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: meta-llama/Llama-3.1-8B
Model Sources
- Repository: https://huggingface.co/MightyOctopus/pricer-merged-model-A-v1
- Paper: N/A
- Demo: N/A
Uses
Direct Use
- Base checkpoint for price-prediction inference
- Base model for further LoRA fine-tuning (see the sketch after this list)
- Research on LLM-based numeric regression
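Because this checkpoint is intended as a base for further LoRA fine-tuning, the sketch below shows one way to attach a fresh adapter with PEFT. The rank, alpha, dropout, and target modules are illustrative defaults, not settings used for this model.

```python
# Sketch: attach a new LoRA adapter to the merged checkpoint for further fine-tuning.
# Rank, alpha, dropout, and target modules are illustrative, not prescribed settings.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "MightyOctopus/pricer-merged-model-A-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```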
Downstream Use
- Domain-specific pricing models
- Comparative studies vs classical ML regressors
- Educational experiments on LoRA merging strategies
Out-of-Scope Use
- Real-time or production pricing systems
- Financial decision-making
- Legal, medical, or safety-critical applications
- Use as an authoritative price source
Bias, Risks, and Limitations
- Predictions are approximate, not exact
- Performance depends on similarity to the training data distribution
- May hallucinate prices for unfamiliar products
- Reflects historical and dataset-specific price biases
- Not robust to rapid market price changes
Recommendations
- Treat outputs as estimates, not ground truth
- Validate predictions against real pricing data (see the sketch after this list)
- Avoid high-stakes or commercial deployment
- Be aware of temporal and dataset bias
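As a concrete way to follow the validation recommendation above, a simple check against verified prices might look like the sketch below; the metric and the sample values are purely illustrative.

```python
# Illustrative validation loop: compare model estimates with known prices.
def mean_absolute_error(predicted: list[float], actual: list[float]) -> float:
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted_prices = [27.5, 119.0, 8.99]   # model estimates (illustrative values)
actual_prices = [24.99, 134.95, 9.49]    # verified market prices (illustrative values)

print(f"MAE: ${mean_absolute_error(predicted_prices, actual_prices):.2f}")
```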
How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MightyOctopus/pricer-merged-model-A-v1"

# Load the tokenizer from the base LLaMA 3.1 8B model and the merged weights from this repo.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = """Product:
Title: Stainless Steel Electric Kettle 1.7L
Category: Home & Kitchen
Description: Fast boiling electric kettle with auto shut-off.
Price is $"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Pass the full tokenizer output so the attention mask is included.
    outputs = model.generate(**inputs, max_new_tokens=10)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
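The model is expected to continue the prompt with a number after "Price is $". A small helper such as the hypothetical `extract_price` below (not part of this repository) can pull that number out of the decoded generation.

```python
import re

def extract_price(generated_text: str) -> float | None:
    """Return the first number that follows 'Price is $' in the generated text."""
    match = re.search(r"Price is \$\s*([\d,]+(?:\.\d+)?)", generated_text)
    if match is None:
        return None
    return float(match.group(1).replace(",", ""))

# In the example above you would pass tokenizer.decode(outputs[0], skip_special_tokens=True).
print(extract_price("Description: Fast boiling electric kettle with auto shut-off.\nPrice is $29.99"))
```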
Training Details
Training Data
- Amazon product metadata
- Fields: title, description, category, ground-truth price
- Prices represented as structured text outputs (an assumed serialization is sketched below)
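The exact serialization used to train the LoRA adapter is not published with this card. The function below is a hypothetical formatting helper that mirrors the prompt shown in the usage example; the field names and sample record are illustrative.

```python
# Assumed record-to-text formatting; mirrors the usage example above, not a published training script.
def format_example(item: dict) -> str:
    return (
        "Product:\n"
        f"Title: {item['title']}\n"
        f"Category: {item['category']}\n"
        f"Description: {item['description']}\n"
        f"Price is ${item['price']:.2f}"
    )

print(format_example({
    "title": "Stainless Steel Electric Kettle 1.7L",
    "category": "Home & Kitchen",
    "description": "Fast boiling electric kettle with auto shut-off.",
    "price": 34.99,  # illustrative value
}))
```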
Training Procedure
This model was created by merging a LoRA adapter (Pricer LoRA v1) into LLaMA 3.1 8B. No additional training was performed after merging.
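For reference, a merge of this kind can be reproduced with PEFT's `merge_and_unload`. This is a minimal sketch, not the exact script used; the adapter path below is a placeholder for the Pricer LoRA v1 weights.

```python
# Minimal sketch of folding a LoRA adapter into the base model with PEFT.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the trained adapter, merge its weights into the base, and save the result.
merged = PeftModel.from_pretrained(base, "path/to/pricer-lora-v1")  # placeholder adapter path
merged = merged.merge_and_unload()
merged.save_pretrained("pricer-merged-model-A-v1")
```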
Training Hyperparameters
- Training regime: bfloat16 mixed precision (inherited from LoRA training)
Speeds, Sizes, Times
[More Information Needed]
Evaluation
This merged checkpoint was not evaluated independently. Evaluation is performed on downstream fine-tuned adapters (e.g., pricer-lora-ft-v3).
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: NVIDIA GPU (e.g. T4, A100)
- Hours used: ~20 hours (LoRA v1 training)
- Cloud Provider: Google Colab / Hugging Face
- Compute Region: Unknown
- Carbon Emitted: Not estimated
Technical Specifications
Model Architecture and Objective
- Transformer-based causal language model
- Objective: next-token prediction, optimized for numeric output consistency
Compute Infrastructure
[More Information Needed]
Hardware
- NVIDIA GPU (T4, L4, A100-class)
Software
- Transformers
- PEFT
- PyTorch
Citation
BibTeX:
```bibtex
@misc{hong2025pricermerged,
  author = {MyungHwan Hong},
  title  = {Pricer Merged LLaMA 3.1 8B Model},
  year   = {2025},
  url    = {https://huggingface.co/MightyOctopus/pricer-merged-model-A-v1}
}
```
APA:
Hong, M. (2025). Pricer Merged LLaMA 3.1 8B Model. Hugging Face. https://huggingface.co/MightyOctopus/pricer-merged-model-A-v1
Glossary
[More Information Needed]
More Information
[More Information Needed]
Model Card Authors
MyungHwan Hong
Model Card Contact
Hugging Face: MightyOctopus