import os
import pandas as pd
import numpy as np
from datetime import timedelta
from binance.client import Client
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
import ta

# Initialize Binance client (insert API keys if needed)
client = Client()
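# No API keys are required for this script: get_exchange_info() and get_historical_klines() are public market-data endpoints.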

# Settings
interval = Client.KLINE_INTERVAL_4HOUR

# Retrieve all trading symbols quoted in USDT
exchange_info = client.get_exchange_info()
symbols = [s['symbol'] for s in exchange_info['symbols'] 
           if s['status'] == 'TRADING' and s['quoteAsset'] == 'USDT']
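# Note: this typically yields several hundred USDT pairs, so the loop at the bottom can take a while and may hit API rate limits.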

# Download or update cached data, engineer features, train a classifier, and report results for one symbol
def process_symbol(symbol):
    data_file = f"{symbol}_data_4h_full.csv"
    # Load or download data
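    # The per-symbol CSV acts as a local cache; later runs only fetch candles newer than the last saved timestamp.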
    if os.path.exists(data_file):
        df = pd.read_csv(data_file, index_col=0, parse_dates=True)
        last_ts = df.index[-1]
        start_time = last_ts + timedelta(hours=4)
        start_str = start_time.strftime("%d %B %Y %H:%M:%S")
        new_klines = client.get_historical_klines(symbol, interval, start_str)
        if new_klines:
            new_df = pd.DataFrame(new_klines, columns=[
                'timestamp','open','high','low','close','volume',
                'close_time','quote_av','trades','tb_base_av','tb_quote_av','ignore'
            ])
            new_df = new_df[['timestamp','open','high','low','close','volume']]
            new_df[['open','high','low','close','volume']] = new_df[['open','high','low','close','volume']].astype(float)
            new_df['timestamp'] = pd.to_datetime(new_df['timestamp'], unit='ms')
            new_df.set_index('timestamp', inplace=True)
            df = pd.concat([df, new_df])
            df = df[~df.index.duplicated(keep='first')]
            df.to_csv(data_file)
    else:
        klines = client.get_historical_klines(symbol, interval, "01 December 2021")
        df = pd.DataFrame(klines, columns=[
            'timestamp','open','high','low','close','volume',
            'close_time','quote_av','trades','tb_base_av','tb_quote_av','ignore'
        ])
        df = df[['timestamp','open','high','low','close','volume']]
        df[['open','high','low','close','volume']] = df[['open','high','low','close','volume']].astype(float)
        df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')
        df.set_index('timestamp', inplace=True)
        df.to_csv(data_file)

    # Feature Engineering
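    # Momentum, trend, and volatility indicators from the ta library, plus plain pandas EMAs/SMAs over several lookbacks.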
    df['rsi'] = ta.momentum.RSIIndicator(df['close'], window=14).rsi()
    df['macd'] = ta.trend.MACD(df['close']).macd()
    for span in [10, 20, 50, 100]:
        df[f'ema_{span}'] = df['close'].ewm(span=span, adjust=False).mean()
    for window in [10, 20, 50, 100]:
        df[f'sma_{window}'] = df['close'].rolling(window=window).mean()
    bb = ta.volatility.BollingerBands(df['close'], window=20, window_dev=2)
    df['bb_width'] = (bb.bollinger_hband() - bb.bollinger_lband()) / bb.bollinger_mavg()
    df['atr'] = ta.volatility.AverageTrueRange(df['high'], df['low'], df['close'], window=14).average_true_range()
    df['adx'] = ta.trend.ADXIndicator(df['high'], df['low'], df['close'], window=14).adx()
    stoch = ta.momentum.StochasticOscillator(df['high'], df['low'], df['close'], window=14)
    df['stoch_k'] = stoch.stoch()
    df['stoch_d'] = stoch.stoch_signal()
    df['williams_r'] = ta.momentum.WilliamsRIndicator(df['high'], df['low'], df['close'], lbp=14).williams_r()
    df['cci'] = ta.trend.CCIIndicator(df['high'], df['low'], df['close'], window=20).cci()
    df['momentum'] = df['close'] - df['close'].shift(10)
    ichi = ta.trend.IchimokuIndicator(df['high'], df['low'], window1=9, window2=26, window3=52)
    df['ichimoku_senkou_span_a'] = ichi.ichimoku_a()
    df['ichimoku_senkou_span_b'] = ichi.ichimoku_b()

    # Trend Label
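    # 1 = close above the Ichimoku cloud, 0 = close below it, -1 = inside the cloud (np.select default)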
    conditions = [
        (df['close'] > df['ichimoku_senkou_span_a']) & (df['close'] > df['ichimoku_senkou_span_b']),
        (df['close'] < df['ichimoku_senkou_span_a']) & (df['close'] < df['ichimoku_senkou_span_b'])
    ]
    df['cloud_trend'] = np.select(conditions, [1, 0], default=-1)
    df.dropna(inplace=True)

    # Model Training
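    # shuffle=False keeps chronological order, so the test set is the most recent 20% of candles.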
    features = df.drop(columns=['open','high','low','close','volume','cloud_trend']).columns
    X, y = df[features], df['cloud_trend']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
    model = RandomForestClassifier(n_estimators=200, class_weight='balanced', random_state=42)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print(f"\n=== {symbol} ===")
    print(classification_report(y_test, y_pred, zero_division=0))

    # Latest prediction
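    # Classifies the most recent completed candle, i.e. the current cloud regime.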
    latest_feat = X.iloc[[-1]]  # keep it as a DataFrame so feature names match the fitted model
    pred = model.predict(latest_feat)[0]
    labels = {1: 'Uptrend', 0: 'Downtrend', -1: 'Neutral'}
    print(f"Predicted next trend for {symbol}: {labels[pred]}")

# Main loop
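# Failures on individual symbols (e.g. pairs with too little history for the 100-period lookbacks) are logged and skipped.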
for s in symbols:
    try:
        process_symbol(s)
    except Exception as e:
        print(f"Error processing {s}: {e}")