Asset returns based on low frequency prices (e.g. end-of-day quotes) still dominate modern portfolio analysis. To make portfolio metrics relevant intraday and to improve the precision of estimates, higher data frequencies need to be explored.

In this presentation we demonstrate how using high frequency market data for portfolio risk management and optimization can improve the classic bias-variance trade-off and bring new insights to strategy backtesting.

Since high frequency prices require special handling, we discuss the key components of an automated model pipeline that handles microstructure noise, price jumps, outliers, fat tails and long memory.

We conclude our presentation with an introduction to high frequency portfolio optimization built on top of intraday portfolio metrics. Examples will be shown in Python.

The PortfolioEffect service offers portfolio optimization, portfolio backtesting, metrics forecasting and intraday risk metrics through 4 APIs: Python, R, Matlab and Java. The uniqueness of our service is that all calculations are done using high frequency market data, which benefits low and high frequency traders alike. We cover 8,000+ US equities (stocks, indices, ETFs). Clients can also upload their own market data. The service employs the latest advances in high frequency market microstructure theory to make classic portfolio risk and optimization results available intraday at tick-level resolution. It uses an automated model pipeline to process high frequency price returns in a streaming fashion.

This webinar will be most beneficial for those who need intraday risk metrics at any frequency, portfolio optimization, portfolio backtesting or metrics forecasting. Examples will be shown in Python. The session will be ideal for:

- Researchers
- Quant Analysts
- Traders in equities, ETFs and indices
- Those looking to backtest strategies
- Python coders interested in financial markets

To register, click here.

day risk metrics. This involves a new methodology for the calculation of risk that was developed through 5 years of research. The results benefit low and high frequency traders and researchers. We offer risk metrics data on volatility and risk factors for 8,000+ financial instruments, including stocks, stock indices and ETFs. Our end-of-day risk metrics data are available on Quandl: Vol & Risk Factors, Risk & Performance Metrics.

To illustrate the potential utility of our data, we have built a sample algorithm that uses it. Using the end-of-day Sharpe ratio calculated by PortfolioEffect, we compare the Sharpe ratio for the last day to the Sharpe ratio for a 1-week window on 10 stocks ('IBM', 'GOOG', 'C', 'F', 'GM', 'GE', 'AAPL', 'AMZN', 'CSCO', 'GS') since 01/04/2013. By "1-week window" we mean that the metric is calculated over a window of 5 trading days. We then compare the daily Sharpe ratio with the weekly one: if the daily Sharpe ratio is greater than the weekly Sharpe ratio, we take a long position, otherwise a short one. At each step of the algorithm we both buy and sell. For example, if we have 8 stocks to buy and 2 stocks to sell, we buy each with a weight of 150% / 8 = 18.75% of the portfolio and sell each with a weight of 50% / 2 = 25%. The result is a changing portfolio that contains long and short positions at any given time.

In summary, we buy stocks with a good Sharpe ratio and finance the purchases by selling stocks with a poor Sharpe ratio. Take a look at the attached backtest.
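The allocation rule described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not PortfolioEffect code; `sharpe_1d` and `sharpe_1w` are hypothetical stand-ins for the daily and weekly Sharpe ratios returned by our service:

```python
# Sketch of the daily-vs-weekly Sharpe allocation rule described above.
# `sharpe_1d` and `sharpe_1w` map symbol -> Sharpe ratio; in practice
# they would come from the PortfolioEffect end-of-day risk data.

def target_weights(sharpe_1d, sharpe_1w, long_budget=1.50, short_budget=0.50):
    """Go long stocks whose daily Sharpe beats the weekly one, short the rest.

    The long budget (150% of the portfolio) is split equally among longs,
    the short budget (50%) equally among shorts. Assumes both sides are
    non-empty.
    """
    longs = [s for s in sharpe_1d if sharpe_1d[s] > sharpe_1w[s]]
    shorts = [s for s in sharpe_1d if s not in longs]
    weights = {s: long_budget / len(longs) for s in longs}
    weights.update({s: -short_budget / len(shorts) for s in shorts})
    return weights

# Example: 8 longs and 2 shorts give 150%/8 = 18.75% and 50%/2 = 25% weights.
d = {s: 1.0 for s in "IBM GOOG C F GM GE AAPL AMZN".split()}
d.update({"CSCO": -1.0, "GS": -1.0})
w = {s: 0.0 for s in d}
print(target_weights(d, w))
```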

For more info on our end-of-day risk datasets, see the attached description.

Friday, February 10, 2017, 10:30 AM – 5:30 PM [CST]

You will learn why high frequency market data is necessary to measure risk correctly and rebalance your portfolio adequately. You will also learn how to build strategies that generate alpha. You will study how to build your own portfolio, create a strategy, backtest it, optimize it, and use vol forecasting with the PortfolioEffect hft Python package.

Prerequisites: beginner knowledge of Python and finance, college-level math, and a laptop with Anaconda2 installed

10:30 AM-11:00 AM Welcome

11:00 AM-11:30 AM Introduction to high frequency market data

11:30 AM-12:00 PM Intraday risk metrics

12:00 PM-12:30 PM Exercise: build intraday risk metrics on a portfolio

12:30 PM-1:00 PM Backtesting portfolios and building your own strategies

1:00 PM-1:30 PM Lunch break

1:30 PM-2:00 PM Exercise on backtesting

2:00 PM-2:30 PM Vol forecasting

3:00 PM-3:30 PM Exercise on vol forecasting

3:30 PM-4:00 PM Portfolio optimization & Alpha generation

4:00 PM-5:00 PM Exercise: build your own optimization for alpha generation

5:00 PM-5:30 PM Closing remarks

Andrey Kostin, PhD & Stephanie Toper

For any questions, email info@portfolioeffect.com

Registration: click here; space is limited


The next workshop is in Chicago on February 10th. Use discount code EARLY to get $150 off. Savings expire on January 10th.

Alpha Generation: Controlling Intraday Risk Profile with Python, Chicago, Feb 10th

You will learn why high frequency market data is necessary to measure risk correctly and rebalance your portfolio adequately. You will also study how to build your own portfolio, create a strategy, backtest it, optimize it, and use vol forecasting with the PortfolioEffectHFT package available on CRAN.

Prerequisites: beginner knowledge of R and finance, college-level math, and a laptop with RStudio installed

10:30 AM-11:00 AM Welcome

11:00 AM-11:30 AM Introduction to high frequency market data

11:30 AM-12:00 PM Intraday risk metrics

12:00 PM-12:30 PM Exercise: build intraday risk metrics on a portfolio

12:30 PM-1:00 PM Backtesting portfolios and building your own strategies

1:00 PM-1:30 PM Lunch break

1:30 PM-2:00 PM Exercise on backtesting

2:00 PM-2:30 PM Vol forecasting

3:00 PM-3:30 PM Exercise

3:30 PM-4:00 PM Portfolio optimization

4:00 PM-5:00 PM Exercise: build your own optimization

5:00 PM-5:30 PM Closing remarks

Registration: click here; space is limited

For a student discount, please email info@portfolioeffect.com

You will learn why high frequency market data is necessary to measure risk correctly and rebalance your portfolio adequately. You will also study how to build your own portfolio, create a strategy, backtest it, optimize it, and use vol forecasting with the PortfolioEffect hft package available on Anaconda.

Prerequisites: beginner knowledge of Python and finance, college-level math, and a laptop with Anaconda2 installed

10:30 AM-11:00 AM Welcome

11:00 AM-11:30 AM Introduction to high frequency market data

11:30 AM-12:00 PM Intraday risk metrics

12:00 PM-12:30 PM Exercise: build intraday risk metrics on a portfolio

12:30 PM-1:00 PM Backtesting portfolios and building your own strategies

1:00 PM-1:30 PM Lunch break

1:30 PM-2:00 PM Exercise on backtesting

2:00 PM-2:30 PM Vol forecasting

3:00 PM-3:30 PM Exercise

3:30 PM-4:00 PM Portfolio optimization

4:00 PM-5:00 PM Exercise: build your own optimization

5:00 PM-5:30 PM Closing remarks

Registration: click here; space is limited

For a student discount, please email info@portfolioeffect.com

It is designed for high frequency market microstructure analysis and contains popular estimators for price variance, quarticity and noise.

https://cran.r-project.org/web/packages/PortfolioEffectEstim/

Or via downloads section:

https://www.portfolioeffect.com/docs/platform/quant/tools/r

http://www.mathworks.com/matlabcentral/fileexchange/55335-portfolioeffectestim-high-frequency-price-estimators-models-toolbox

Or via downloads section:

https://www.portfolioeffect.com/docs/platform/quant/tools/matlab

The package features key estimators for working with high frequency market data.

**Microstructure Noise:**

- Autocovariance Noise Variance
- Realized Noise Variance
- Unbiased Realized Noise Variance
- Noise-to-Signal Ratio

**Price Variance:**

- Two Series Realized Variance
- Multiple Series Realized Variance
- Modulated Realized Variance
- Jump Robust Modulated Realized Variance
- Uncertainty Zones Realized Variance
- Kernel Realized Variance (Bartlett, Cubic, 5th/6th/7th/8th-order, Epanechnikov, Parzen, Tukey-Hanning kernels)

**Price Quarticity:**

- Realized Quarticity
- Realized Quad-power Quarticity
- Realized Tri-power Quarticity
- Modulated Realized Quarticity

Users can provide their own high frequency market data or use our server-side high frequency prices for all major US equities.

To run an estimator using client-side data:

```
# load the bundled sample of GOOG high frequency prices
data(goog.data)
estimator <- estimator_create(priceData = goog.data)
# two series realized variance (TSRV)
rv.data <- variance_tsrv(estimator)
util_plot2d(rv.data, title = "Realized Variance of GOOG")
```

To run an estimator using server-side data:

```
# request server-side GOOG prices for the given date range
estimator <- estimator_create(asset = "GOOG", fromTime = "2014-09-01", toTime = "2014-09-14")
# two series realized variance with scale parameter K
tsrv.data <- variance_tsrv(estimator, K = 2)
util_plot2d(tsrv.data, title = "Two Series Realized Variance of GOOG")
```

More details can be found in the package manual:

https://cran.r-project.org/web/packages/PortfolioEffectEstim/vignettes/PortfolioEffectEstim.pdf

API Reference:

https://cran.r-project.org/web/packages/PortfolioEffectEstim/PortfolioEffectEstim.pdf

Microstructure noise describes price deviation from its fundamental value induced by certain features of the market under consideration. Common sources of microstructure noise are:

- bid-ask bounce effect
- order arrival latency
- asymmetry of information
- discreteness of price changes

Noise makes high frequency estimates of some parameters (e.g. realized volatility) very unstable. The situation gets even worse for higher-order moments like kurtosis, which makes tail risk estimation using HF data very problematic. We will investigate how severe noise contamination can be for different stocks as we move towards transaction-level frequencies.

Bid-ask bounce occurs when traders buy at ask prices and sell at bid prices. Their trades cause prices to bounce from bid to ask. These price changes reverse when traders arrive on the other side of the market.
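The bounce effect is easy to reproduce with a toy simulation (illustrative Python, unrelated to the PortfolioEffect implementation): hold the fundamental mid price fixed and let trades hit the bid or ask at random. Observed returns then show positive realized variance and the negative first-lag autocovariance that is the classic signature of bid-ask bounce:

```python
import random

random.seed(7)
mid = 100.0     # fundamental (mid) price, held constant
spread = 0.02   # bid-ask spread

# each trade executes at the bid or the ask at random
trades = [mid + random.choice([-1, 1]) * spread / 2 for _ in range(10_000)]
returns = [b - a for a, b in zip(trades, trades[1:])]

# realized variance is positive although the mid price never moved
rv = sum(r * r for r in returns)

# first-lag autocovariance of returns is negative: the bounce signature
n = len(returns)
mean_r = sum(returns) / n
autocov = sum((returns[i] - mean_r) * (returns[i + 1] - mean_r)
              for i in range(n - 1)) / (n - 1)

print("realized variance:", rv)
print("lag-1 autocovariance:", autocov)
```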

One of the popular noise types used in the market microstructure literature is additive noise, which represents the observable asset price as a sum of two independent components – a noise-free price with variance $\sigma^2$ and a noise term with variance $v$:

$$

p^*_t = p_t + \epsilon_t,

$$

- $p^*_t$ is a noise contaminated price
- $p_t$ is a noise-free price
- $\epsilon_t$ is a noise term.

Consequently, when we sample $[0,T]$ into $N$ slices (i.e. $\delta=T/N$), the expectation of the quadratic variation, which should estimate $\int_0^T \sigma_t^2 dt$, becomes:

$$U(N)=\mathbb{E}\left(\sum_{n=1}^{N} \left(p_{n\delta}+\epsilon_n - p_{(n-1)\delta} -\epsilon_{n-1}\right)^2 \right)= \sum_{n=1}^{N} \left[\mathbb{E}\left(p_{n\delta} - p_{(n-1)\delta}\right)^2 + \mathbb{E}\left(\epsilon_n -\epsilon_{n-1}\right)^2\right]$$

since the microstructure noise is assumed to be independent of the price; moreover, it is centered and has variance $v$, so $\mathbb{E}(\epsilon_n -\epsilon_{n-1})^2=2v$.

Our estimate is now:

$$U(N) \approx \int_0^T \sigma_t^2\,dt + 2Nv \quad \text{for large } N$$

This means that the estimate of the squared volatility increases linearly with the sampling rate, and this growth comes purely from the microstructure noise (with variance $v$).
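This linear growth is easy to check numerically. The sketch below (a toy Monte Carlo experiment in Python, not the PortfolioEffect pipeline) simulates a Brownian efficient price with additive i.i.d. noise and computes the realized variance at several sampling frequencies; the estimate tracks $\int_0^T \sigma_t^2 dt + 2Nv$ and grows with $N$ although the integrated variance is fixed:

```python
import random, math

random.seed(1)
sigma2 = 0.04   # integrated variance of the efficient price over the day
v = 1e-5        # variance of the additive microstructure noise

N_max = 10_000
# efficient log-price: Brownian path with total variance sigma2
p = [0.0]
for _ in range(N_max):
    p.append(p[-1] + random.gauss(0.0, math.sqrt(sigma2 / N_max)))
# observed price = efficient price + i.i.d. additive noise
obs = [x + random.gauss(0.0, math.sqrt(v)) for x in p]

rvs = {}
for N in (100, 1_000, 10_000):
    step = N_max // N
    sampled = obs[::step]                # subsample to N increments
    rv = sum((b - a) ** 2 for a, b in zip(sampled, sampled[1:]))
    rvs[N] = rv
    print(f"N={N:6d}  realized variance={rv:.4f}  "
          f"sigma2 + 2*N*v={sigma2 + 2 * N * v:.4f}")
```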

**PortfolioEffect's market microstructure model deals with additive noise contamination in high frequency prices to allow for unbiased, noise-free estimates of the actual return distribution.**

We use the PortfolioEffectHFT R package to build a simple buy-and-hold portfolio that spans 3 trading days and holds 2 stock positions based on sample server-side price quotes.

```
require(PortfolioEffectHFT)
options(jupyter.plot_mimetypes = 'image/png',repr.plot.width=14, repr.plot.height=8)
require(scales)
# create portfolio
portfolio = portfolio_create("2015-01-07", "2015-01-09")
symbols = c("GOOG", "AAPL")
quantities = c(100, 600)
portfolio_addPosition(portfolio,symbols,quantities)
goog.weight.1s = position_weight(portfolio, "GOOG")
aapl.weight.1s = position_weight(portfolio, "AAPL")
weight.1s <- data.frame(symbol = c("GOOG", "AAPL"),
weight = c(goog.weight.1s[1, 2], aapl.weight.1s[1, 2]))
ggplot(data = weight.1s, aes(x = "" , y = weight, fill = symbol)) +
coord_polar(theta = "y") +
geom_bar(width = 1, stat = "identity", color='white') +
scale_fill_brewer(palette="Paired")+
geom_text(aes(y = weight/3 + c(0, cumsum(weight)[-length(weight)]),
label = paste(symbol, "\n", percent(weight/100))), size=6, fontface="bold", color='white')+
theme_minimal()+
# hide axis text, ticks and grid for a clean pie chart
theme(axis.text.x=element_blank(),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
panel.border = element_blank(),
panel.grid=element_blank(),
axis.ticks = element_blank(),
plot.title=element_text(size=20, face="bold"),
legend.position = "right"
)
```

```
Loading required package: PortfolioEffectHFT
Loading required package: grid
Loading required package: ggplot2
Loading required package: rJava

Welcome to PortfolioEffectHFT.

Loading required package: scales
```

```
# noisy variance
portfolio_settings(portfolio, noiseModel=F, inputSamplingInterval='1s')
goog.variance.noisy.1s = position_variance(portfolio, "GOOG")
aapl.variance.noisy.1s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=F, inputSamplingInterval='10s')
goog.variance.noisy.10s = position_variance(portfolio, "GOOG")
aapl.variance.noisy.10s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=F, inputSamplingInterval='1m')
goog.variance.noisy.60s = position_variance(portfolio, "GOOG")
aapl.variance.noisy.60s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=F, inputSamplingInterval='5m')
goog.variance.noisy.300s = position_variance(portfolio, "GOOG")
aapl.variance.noisy.300s = position_variance(portfolio, "AAPL")
goog.values.variance.noisy = rbind(
data.frame(goog.variance.noisy.1s, legend="1 sec"),
data.frame(goog.variance.noisy.10s, legend="10 sec"),
data.frame(goog.variance.noisy.60s, legend="1 min"),
data.frame(goog.variance.noisy.300s, legend="5 min")
)
aapl.values.variance.noisy = rbind(
data.frame(aapl.variance.noisy.1s, legend="1 sec"),
data.frame(aapl.variance.noisy.10s, legend="10 sec"),
data.frame(aapl.variance.noisy.60s, legend="1 min"),
data.frame(aapl.variance.noisy.300s, legend="5 min")
)
# Noise-free Variance
portfolio_settings(portfolio, noiseModel=T, inputSamplingInterval='1s')
goog.variance.nf.1s = position_variance(portfolio, "GOOG")
aapl.variance.nf.1s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=T, inputSamplingInterval='10s')
goog.variance.nf.10s = position_variance(portfolio, "GOOG")
aapl.variance.nf.10s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=T, inputSamplingInterval='1m')
goog.variance.nf.60s = position_variance(portfolio, "GOOG")
aapl.variance.nf.60s = position_variance(portfolio, "AAPL")
portfolio_settings(portfolio, noiseModel=T, inputSamplingInterval='5m')
goog.variance.nf.300s = position_variance(portfolio, "GOOG")
aapl.variance.nf.300s = position_variance(portfolio, "AAPL")
goog.values.variance.nf = rbind(
data.frame(goog.variance.nf.1s, legend="1 sec"),
data.frame(goog.variance.nf.10s, legend="10 sec"),
data.frame(goog.variance.nf.60s, legend="1 min"),
data.frame(goog.variance.nf.300s, legend="5 min")
)
aapl.values.variance.nf = rbind(
data.frame(aapl.variance.nf.1s, legend="1 sec"),
data.frame(aapl.variance.nf.10s, legend="10 sec"),
data.frame(aapl.variance.nf.60s, legend="1 min"),
data.frame(aapl.variance.nf.300s, legend="5 min")
)
```

```
util_plot2df(value ~ time, data = goog.values.variance.noisy, title='GOOG - Variance (Noise Model - OFF)', subtitle = 'daily time scale, various frequencies',bw=T)
util_plot2df(value ~ time, data = goog.values.variance.nf, title='GOOG - Variance (Noise Model - ON)', subtitle = 'daily time scale, various frequencies', bw=T)
```

```
util_plot2df(value ~ time, data = aapl.values.variance.noisy, title='AAPL - Variance (Noise Model - OFF)', subtitle = 'daily time scale, various frequencies', bw=T)
util_plot2df(value ~ time, data = aapl.values.variance.nf, title='AAPL - Variance (Noise Model - ON)', subtitle = 'daily time scale, various frequencies', bw=T)
```

Noise levels at different frequencies are commonly measured by the Noise-to-Signal (NTS) ratio, which relates the variance of the noise to the variance of the noise-free price. NTS generally increases with frequency, exceeding 50% for some symbols at 1-second resolution.

$$

\mathrm{NTS}_t = \frac{v_{t}}{\sigma^{2}_{t}}

$$

```
# Noise-to-signal ratio
goog.nts.1s = mean(goog.variance.noisy.1s[,2] - goog.variance.nf.1s [,2])/mean(goog.variance.nf.1s [,2])
goog.nts.10s = mean(goog.variance.noisy.10s[,2] - goog.variance.nf.10s [,2])/mean(goog.variance.nf.10s [,2])
goog.nts.60s = mean(goog.variance.noisy.60s[,2] - goog.variance.nf.60s [,2])/mean(goog.variance.nf.60s [,2])
goog.nts.300s = mean(goog.variance.noisy.300s[,2] - goog.variance.nf.300s [,2])/mean(goog.variance.nf.300s [,2])
aapl.nts.1s = mean(aapl.variance.noisy.1s[,2] - aapl.variance.nf.1s [,2])/mean(aapl.variance.nf.1s [,2])
aapl.nts.10s = mean(aapl.variance.noisy.10s[,2] - aapl.variance.nf.10s [,2])/mean(aapl.variance.nf.10s [,2])
aapl.nts.60s = mean(aapl.variance.noisy.60s[,2] - aapl.variance.nf.60s [,2])/mean(aapl.variance.nf.60s [,2])
aapl.nts.300s = mean(aapl.variance.noisy.300s[,2] - aapl.variance.nf.300s [,2])/mean(aapl.variance.nf.300s [,2])
frequencyName = c("1 sec", "10 sec", "1 min", "5 min")
goog.nts <- data.frame(time = factor(frequencyName, levels=frequencyName), nts = c(goog.nts.1s, goog.nts.10s, goog.nts.60s, goog.nts.300s))
aapl.nts <- data.frame(time = factor(frequencyName, levels=frequencyName), nts = c(aapl.nts.1s, aapl.nts.10s, aapl.nts.60s, aapl.nts.300s))
ggplot(data=goog.nts, aes(x=time, y=nts, fill=time)) + geom_bar(stat="identity") + xlab("Frequency") + ylab("NTS") + ggtitle(bquote(atop("GOOG - Noise-to-Signal Ratio", atop(italic("daily time scale, various frequencies"), "")))) + util_plotTheme(has.subtitle=T, bw=T) + scale_fill_brewer(palette="Blues") + theme(legend.position = "none")
ggplot(data=aapl.nts, aes(x=time, y=nts, fill=time)) + geom_bar(stat="identity") + xlab("Frequency") + ylab("NTS") + ggtitle(bquote(atop("AAPL - Noise-to-Signal Ratio", atop(italic("daily time scale, various frequencies"), "")))) + util_plotTheme(has.subtitle=T, bw=T) + scale_fill_brewer(palette="Blues") + theme(legend.position = "none")
```

Both strategies employ a price moving-average signal with windows of different calendar lengths to simulate position entry and exit with different holding-period durations. Our trading portfolio consists of a single *GOOG* position to keep matters simple.

First, we define a moving average method that receives a price vector and a window length for averaging.

```
# Moving average of x over `order` points;
# the first (order - 1) points use an expanding window
MA = function(x, order) {
  result = x
  x1 = c(0, x)
  # rolling mean via difference of cumulative sums
  result[order:NROW(x)] = (cumsum(x1)[-(1:order)] - cumsum(x1)[-((NROW(x1) - order + 1):NROW(x1))]) / order
  # expanding mean for the warm-up period
  result[1:(order - 1)] = cumsum(x[1:(order - 1)]) / (1:(order - 1))
  # subtract a tiny epsilon so that price == MA does not trigger a signal
  return(result - 1e-10)
}
```

The high frequency trading strategy uses a window length of 150 seconds, while the low frequency strategy uses an 800-second window. When the stock price exceeds its N-second moving average, each strategy buys 100 shares of the stock. If the moving average moves above the current price while we are still in a position, the strategy issues a sell signal. Now that we have defined our position holding rules, we can construct our trading portfolios for further analysis.

```
require(PortfolioEffectHFT)
symbol = "GOOG"
dateStart = "2014-10-13 09:30:00"
dateEnd = "2014-10-14 16:00:00"
highFrequencyPortfolio=portfolio_create(fromTime=dateStart,toTime=dateEnd)
lowFrequencyPortfolio=portfolio_create(fromTime=dateStart,toTime=dateEnd)
portfolio_addPosition(highFrequencyPortfolio,symbol,1)
price=position_price(highFrequencyPortfolio,symbol)
printTime=price[,1]
highFrequencyStrategy=array(0,dim=NROW(price))
highFrequencyStrategy[price[,"value"]>MA(price[,"value"],150)]=100
lowFrequencyStrategy=array(0,dim=NROW(price))
lowFrequencyStrategy[price[,"value"]>MA(price[,"value"],800)]=100
# Add position GOOG to portfolios
portfolio_addPosition(portfolio=highFrequencyPortfolio,symbol=symbol,quantity=highFrequencyStrategy,time=printTime)
portfolio_addPosition(lowFrequencyPortfolio,symbol=symbol,quantity=lowFrequencyStrategy,time=printTime)
```

Display general information about the portfolio at the end of the dataset:

```
print(highFrequencyPortfolio)
```

```
PORTFOLIO SETTINGS
Portfolio metrics mode portfolio
Window length 1d
Time scale 1d
Holding periods only FALSE
Short sales mode lintner
Price jumps model moments
Microstructure noise model TRUE
Portfolio factor model sim
Density model GLD
Drift term enabled TRUE
Results sampling interval 1s
Input sampling interval none
Transaction cost per share 0
Transaction cost fixed 0
[0%.....10%.....20%.....30%.....40%.....50%.....60%.....70%.....80%.....90%.....100%] ( 8.86 sec )
POSITION SUMMARY
Quantity Weight (in %) Profit Return (in %) Value
100.00 100.00 -29139.56 -26.90 53794.00
Price
537.94
PORTFOLIO SUMMARY
Profit Return (in %) Value
Portfolio -29139.56 -26.9 53794
```

```
print(lowFrequencyPortfolio)
```

```
PORTFOLIO SETTINGS
Portfolio metrics mode portfolio
Window length 1d
Time scale 1d
Holding periods only FALSE
Short sales mode lintner
Price jumps model moments
Microstructure noise model TRUE
Portfolio factor model sim
Density model GLD
Drift term enabled TRUE
Results sampling interval 1s
Input sampling interval none
Transaction cost per share 0
Transaction cost fixed 0
[0%.....10%.....20%.....30%.....40%.....50%.....60%.....70%.....80%.....90%.....100%] ( 10.08 sec )
POSITION SUMMARY
Quantity Weight (in %) Profit Return (in %) Value
0.00 0.00 -13399.54 -12.40 0.00
Price
537.94
PORTFOLIO SUMMARY
Profit Return (in %) Value
Portfolio -13399.54 -12.4 0
```

Let's plot the corresponding holding periods for each intraday strategy.

```
plot1=util_ggplot(util_plot2d(position_quantity(highFrequencyPortfolio,symbol),title="High Frequency Portfolio Strategy",line_size=0.6))
plot2=util_ggplot(util_plot2d(position_quantity(lowFrequencyPortfolio,symbol),title="Low Frequency Portfolio Strategy",line_size=0.6))
util_multiplot(plot1,plot2,cols=1)
```

As you can see, our low frequency trading portfolio has on average a higher return variance than its high frequency counterpart and therefore turns out to be a riskier investment. PortfolioEffect adjusts for market microstructure noise effects and other HF anomalies, which could otherwise cause severe bias in traditional variance estimates. The intraday bias could be particularly severe for the high frequency strategy, which trades at intervals closer to the distances between actual stock market transactions.

```
util_plot2d(portfolio_variance(highFrequencyPortfolio),title="Variance, daily",Legend="HF Portfolio")+
util_line2d(portfolio_variance(lowFrequencyPortfolio),Legend="LF Portfolio")
```

Strategy Value-at-Risk displays similar behavior to the return variance. The 95% VaR is a more balanced measure of overall risk than return variance, as it accounts for tail events using higher-order moments (skewness, kurtosis) of the return distribution.

```
util_plot2d(portfolio_VaR(highFrequencyPortfolio,0.05),title="Value at Risk in %, daily (95% c.i.)",Legend="HF Portfolio")+
util_line2d(portfolio_VaR(lowFrequencyPortfolio,0.05),Legend="LF Portfolio")
```

We selected the classic Sharpe ratio among the many popular performance measures offered by the platform. Our high frequency trading strategy displays slightly higher Sharpe ratio values during most of the day, though the difference is not very significant given the substantial overlap of holding periods in our example. However, you can easily extend this simple example to compare your own trading rules and portfolios that cover multiple assets.

```
util_plot2d(portfolio_sharpeRatio(highFrequencyPortfolio),title="Sharpe Ratio, daily",Legend="HF Portfolio")+
util_line2d(portfolio_sharpeRatio(lowFrequencyPortfolio),Legend="LF Portfolio")
```
