iOS CoreText (Part 2)

This post covers how to use CoreText to implement mixed text-and-image layout and tap handling.

Approach to mixing text and images

Drawing text only requires knowing the text metrics, but drawing an image is different: we need its origin, width, and height. In CoreText we can treat an inserted image as a special CTRun and use a CTRunDelegate to supply the image's width and height, which solves the sizing problem. CoreText will not draw the image for us, however, so we still have to find the position where the image should appear (its origin) and draw it ourselves.

Implementation

Building on the code from the previous post, iOS CoreText (Part 1), we insert a placeholder character at the position where the image should appear and attach a CTRun delegate to it.

- (void)drawTextAndImage:(CGContextRef)context size:(CGSize)size {
    NSMutableAttributedString *astring = _textString;
    //Set up the coordinate system
    //Reset the text matrix so glyphs are not transformed
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    //Translate the context up by the drawing height
    CGContextTranslateCTM(context, 0, size.height);
    //Scale: x is unchanged (factor 1), y is flipped (factor -1), i.e. the context is mirrored around the x axis
    CGContextScaleCTM(context, 1, -1);
    //The key part of this post:
    //set up the CTRun delegate
    CTRunDelegateCallbacks callBacks;
    memset(&callBacks, 0, sizeof(CTRunDelegateCallbacks));
    
    callBacks.version = kCTRunDelegateVersion1;
    callBacks.getAscent = ascentCallbacks;
    callBacks.getDescent = descentCallbacks;
    callBacks.getWidth = widthCallbacks;
    
    CTRunDelegateRef delegate = CTRunDelegateCreate(&callBacks, (__bridge void *)astring);
    
    //Create the placeholder character (the object replacement character)
    unichar placeHolder = 0xFFFC;
    NSString *placeHolderString = [NSString stringWithCharacters:&placeHolder length:1];
    NSMutableAttributedString *placeHolderAttributedString = [[NSMutableAttributedString alloc] initWithString:placeHolderString];
 
    NSDictionary *attributedDic = [NSDictionary dictionaryWithObjectsAndKeys:(__bridge id)delegate, (__bridge NSString *)kCTRunDelegateAttributeName, nil];
    [placeHolderAttributedString setAttributes:attributedDic range:NSMakeRange(0, 1)];
    CFRelease(delegate);
    
    //Insert the placeholder where the image should appear
    [astring insertAttributedString:placeHolderAttributedString atIndex:astring.length/2];
   
    //Create the path
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddRect(path, NULL, self.bounds);
  
    //Draw the text
    CTFramesetterRef frameRef = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)astring);
    CTFrameRef fref = CTFramesetterCreateFrame(frameRef, CFRangeMake(0, astring.length), path, NULL);
    CTFrameDraw(fref, context);
    
    //Draw the image
    UIImage *image = [UIImage imageNamed:@"tj_Image"];
    CGRect imageRect = [self calculateImageRect:fref];
    CGContextDrawImage(context, imageRect, image.CGImage);
    
    CFRelease(path);
    CFRelease(fref);
    CFRelease(frameRef);
}
#pragma mark ---CTRun delegate callbacks---
//These callbacks give the placeholder run fixed metrics: 11pt ascent, 7pt descent, 36pt width
CGFloat ascentCallbacks (void *ref) {
    return 11;
}

CGFloat descentCallbacks (void *ref) {
    return 7;
}

CGFloat widthCallbacks (void *ref) {
    return 36;
}

The parts that are the same as last time are skipped; we start from setting up the delegate.
First, fill in the delegate callbacks struct, CTRunDelegateCallbacks callBacks;. It has four fields to set: the version, the ascent, the descent, and the width. The last three are C function pointers, and they are what determine the placeholder's (and therefore the image's) width and height.
Once that is done, create a placeholder character: unichar placeHolder = 0xFFFC.
Convert the placeholder character to an NSString, then to an NSMutableAttributedString.
Create a dictionary whose key is kCTRunDelegateAttributeName and whose value is the delegate we created (__bridge bridges between Objective-C and Core Foundation objects). Essentially, addAttributes: just adds a dictionary, which is an important point to understand.
Then insert the placeholder string at the position where the image should be displayed.
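
In the code above the callbacks return fixed numbers, so every placeholder gets the same 36 x 18 pt box. If you want the placeholder sized from the actual image, one option is to pass the metrics through the refCon parameter of CTRunDelegateCreate. The following is a minimal sketch of that idea, not part of the original code; the meta dictionary and its keys are made up for illustration, and you must keep a strong reference to meta for as long as the delegate is alive.

//Sketch only: pass the image size through the refCon so the callbacks
//can return real metrics instead of hard-coded ones.
static CGFloat imageAscentCallback(void *ref) {
    //ref is the NSDictionary passed to CTRunDelegateCreate below
    NSDictionary *meta = (__bridge NSDictionary *)ref;
    return [meta[@"height"] floatValue];
}

static CGFloat imageDescentCallback(void *ref) {
    return 0;
}

static CGFloat imageWidthCallback(void *ref) {
    NSDictionary *meta = (__bridge NSDictionary *)ref;
    return [meta[@"width"] floatValue];
}

//Usage (inside the drawing method); meta must outlive the delegate:
UIImage *image = [UIImage imageNamed:@"tj_Image"];
NSDictionary *meta = @{@"width": @(image.size.width), @"height": @(image.size.height)};
CTRunDelegateCallbacks callBacks;
memset(&callBacks, 0, sizeof(CTRunDelegateCallbacks));
callBacks.version = kCTRunDelegateVersion1;
callBacks.getAscent = imageAscentCallback;
callBacks.getDescent = imageDescentCallback;
callBacks.getWidth = imageWidthCallback;
CTRunDelegateRef delegate = CTRunDelegateCreate(&callBacks, (__bridge void *)meta);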

//Draw the image
//The image we want to display
    UIImage *image = [UIImage imageNamed:@"tj_Image"];
//A custom method that returns the image's origin and size
    CGRect imageRect = [self calculateImageRect:fref];
//Draw the image into the calculated rect
    CGContextDrawImage(context, imageRect, image.CGImage);

Now let's focus on the calculateImageRect method.

First, the idea behind it:
We get all the CTLines, then iterate over the CTRuns in each CTLine, read each CTRun's attributes dictionary, and check whether it contains a value for the key kCTRunDelegateAttributeName. If it does, that run marks the position of the image we inserted.

- (CGRect)calculateImageRect:(CTFrameRef)frame {
    //Find the CTLine origins first, then look for the CTRun we care about
    NSArray *allLine = (__bridge NSArray *)CTFrameGetLines(frame);
    NSInteger lineCount = [allLine count];
    //Get the origin of every CTLine
    CGPoint points[lineCount];
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), points);
    CGRect imageRect = CGRectMake(0, 0, 0, 0);
    for (int i = 0; i < lineCount; i++) {
        CTLineRef line = (__bridge CTLineRef)allLine[i];
        //Get all the CTRuns in this line
        CFArrayRef allRun = CTLineGetGlyphRuns(line);
        CFIndex runCount = CFArrayGetCount(allRun);
        
        //Get the line origin
        CGPoint lineOrigin = points[i];
        
        
        for (int j = 0; j < runCount; j++) {
            CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(allRun, j);
            NSDictionary *attributes = (__bridge NSDictionary *)CTRunGetAttributes(run);
            CTRunDelegateRef delegate = (__bridge CTRunDelegateRef)[attributes valueForKey:(__bridge id)kCTRunDelegateAttributeName];
            if (delegate == nil) {
                //You can ignore this part for now; it is used later for tap handling
                NSString *textClickString = [attributes valueForKey:@"textClick"];
                if (textClickString != nil) {
                    [textFrameArray addObject:[NSValue valueWithCGRect:[self getLocWith:frame line:line run:run origin:lineOrigin]]];
                }
                
                continue;
            }
            //Get the image's rect
            imageRect = [self getLocWith:frame line:line run:run origin:lineOrigin];
        }
    }
    return imageRect;
}

Compared with the idea outlined above, calculateImageRect is not complicated.
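
One limitation worth noting (not covered in the original code): the method returns a single CGRect, so it only handles one inserted image. A minimal sketch, assuming several image placeholders were inserted, would collect every image run's rect into an array instead; the method name calculateAllImageRects is made up for illustration.

//Sketch only: collect the rect of every run that carries a CTRunDelegate attribute,
//so multiple inserted images can be drawn later.
- (NSArray<NSValue *> *)calculateAllImageRects:(CTFrameRef)frame {
    NSMutableArray<NSValue *> *rects = [NSMutableArray array];
    NSArray *allLine = (__bridge NSArray *)CTFrameGetLines(frame);
    NSInteger lineCount = [allLine count];
    CGPoint points[lineCount];
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), points);
    for (int i = 0; i < lineCount; i++) {
        CTLineRef line = (__bridge CTLineRef)allLine[i];
        CFArrayRef allRun = CTLineGetGlyphRuns(line);
        for (CFIndex j = 0; j < CFArrayGetCount(allRun); j++) {
            CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(allRun, j);
            NSDictionary *attributes = (__bridge NSDictionary *)CTRunGetAttributes(run);
            //A run with a CTRunDelegate attribute is one of our image placeholders
            if ([attributes objectForKey:(__bridge id)kCTRunDelegateAttributeName] != nil) {
                CGRect rect = [self getLocWith:frame line:line run:run origin:points[i]];
                [rects addObject:[NSValue valueWithCGRect:rect]];
            }
        }
    }
    return rects;
}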

- (CGRect)getLocWith:(CTFrameRef)frame line:(CTLineRef)line run:(CTRunRef)run origin:(CGPoint)point {
    CGRect boundRect;
    CGFloat ascent = 0.0f;
    CGFloat descent = 0.0f;
    CGFloat width = CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, &descent, NULL);
    boundRect.size.width = width;
    boundRect.size.height = ascent + descent;
    
    //Get the run's x offset within the line
    CGFloat xoffset = CTLineGetOffsetForStringIndex(line, CTRunGetStringRange(run).location, NULL);
    boundRect.origin.x = point.x + xoffset;
    boundRect.origin.y = point.y - descent;
    
    //Get the bounding box of the frame's path
    CGPathRef path = CTFrameGetPath(frame);
    CGRect colRect = CGPathGetBoundingBox(path);

    return CGRectOffset(boundRect, colRect.origin.x, colRect.origin.y);
}

    CGFloat ascent = 0.0f;
    CGFloat descent = 0.0f;
    CGFloat width = CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, &descent, NULL);
    boundRect.size.width = width;
    boundRect.size.height = ascent + descent;

The width, ascent, and descent obtained here are exactly the values returned by the functions we set in CTRunDelegateCallbacks callBacks (with the callbacks above, width is 36 and height is 11 + 7 = 18).

CGFloat xoffset = CTLineGetOffsetForStringIndex(line, CTRunGetStringRange(run).location, NULL);

This line gets the CTRun's x offset relative to its CTLine. Knowing this offset and the CTLine's origin, we can compute the image's origin.

boundRect.origin.y = point.y - descent;

This is mainly so the image sits roughly centered on the line of text (try adjusting it yourself).
With that, we have the image's rect.

Now let's look at tap handling

When we make a range of text or an image respond to taps, we rely on the fact noted earlier that addAttributes: essentially just adds a dictionary. That lets us define a special key of our own in the dictionary and use it to decide whether a given CTRun should be tappable.

LJTextView *textView = [[LJTextView alloc] initWithFrame:CGRectMake(0, 64, width, height - 64)];
textView.backgroundColor = [UIColor whiteColor];
textView.textString = [[NSMutableAttributedString alloc]initWithString:@"123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123"];
    
NSDictionary *dic = @{@"textClick":@"click",NSBackgroundColorAttributeName:[UIColor redColor]};
[textView.textString addAttributes:dic range:NSMakeRange(24, 4)];

In the code above we define a special key called textClick (and also give the same range a red background so the tappable text stands out).

NSString *textClickString = [attributes valueForKey:@"textClick"];
if (textClickString != nil) {
      [textFrameArray addObject:[NSValue valueWithCGRect:[self getLocWith:frame line:line run:run origin:lineOrigin]]];
}

While calculating the image's rect, we also grab the rects of the CTRuns that carry the tap attribute and store them in the textFrameArray array, so they can be checked against the touch later.

- (CGRect)convertRectToWindow:(CGRect)rect {
    return CGRectMake(rect.origin.x, self.bounds.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = touches.anyObject;
    CGPoint point = [touch locationInView:self];
    [textFrameArray enumerateObjectsUsingBlock:^(NSValue *value, NSUInteger idx, BOOL * _Nonnull stop) {
        CGRect rect = [value CGRectValue];
        CGRect convertRect = [self convertRectToWindow:rect];
        if (CGRectContainsPoint(convertRect, point)) {
            NSString *message = [NSString stringWithFormat:@"Tapped %lu", (unsigned long)idx];
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Notice" message:message delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
            [alert show];
        }
    }];
}

convertRectToWindow converts between coordinate systems: the rects we computed are in Core Text's flipped coordinate space (origin at the bottom left), while the touch point from locationInView: uses UIKit's top-left origin, so we flip the rect's y before comparing. The touch code above then simply checks whether the tapped point falls inside one of the recorded rects; if it does, we respond, otherwise we do nothing.
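
As an aside, UIAlertView is deprecated on modern iOS. A minimal sketch of showing the same alert with UIAlertController follows; it assumes the view has access to a presenting view controller, here a hypothetical self.ownerViewController property.

//Sketch only: present the alert with UIAlertController instead of UIAlertView.
//self.ownerViewController is a hypothetical property pointing at the presenting view controller.
NSString *message = [NSString stringWithFormat:@"Tapped %lu", (unsigned long)idx];
UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Notice"
                                                               message:message
                                                        preferredStyle:UIAlertControllerStyleAlert];
[alert addAction:[UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleDefault handler:nil]];
[self.ownerViewController presentViewController:alert animated:YES completion:nil];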

CoreText is not actually that complex; once you understand these pieces, it is easy to put them to use in real projects.
In the next post we'll look at how YYLabel is implemented.
